perm filename AI.V2[BB,DOC] blob sn#781782 filedate 1985-01-13 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00186 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00023 00002	This is Volume 2 of the AI-List digests.
C00024 00003	∂03-Jan-84  1823	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #1 
C00041 00004	∂04-Jan-84  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #2 
C00072 00005	∂05-Jan-84  1502	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #3 
C00101 00006	∂05-Jan-84  1939	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #4 
C00128 00007	∂09-Jan-84  1641	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #5 
C00142 00008	∂10-Jan-84  1336	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #6 
C00164 00009	∂16-Jan-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #7 
C00190 00010	∂17-Jan-84  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #8 
C00214 00011	∂22-Jan-84  1625	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #9 
C00238 00012	∂30-Jan-84  2209	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #10
C00266 00013	∂02-Feb-84  0229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #11
C00291 00014	∂03-Feb-84  2358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #12
C00311 00015	∂05-Feb-84  0007	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #13
C00328 00016	∂11-Feb-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #14
C00357 00017	∂11-Feb-84  0121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #15
C00379 00018	∂11-Feb-84  0215	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #16
C00414 00019	∂11-Feb-84  2236	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #17
C00445 00020	∂11-Feb-84  2320	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #18
C00466 00021	∂15-Feb-84  2052	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #19
C00486 00022	∂22-Feb-84  1137	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #20
C00515 00023	∂22-Feb-84  1758	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #21
C00539 00024	∂29-Feb-84  1547	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #22
C00565 00025	∂29-Feb-84  1645	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #23
C00586 00026	∂06-Mar-84  1159	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #24
C00614 00027	∂06-Mar-84  1305	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #25
C00639 00028	∂06-Mar-84  1615	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #26
C00672 00029	∂07-Mar-84  1632	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #27
C00698 00030	∂09-Mar-84  2228	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #28
C00720 00031	∂09-Mar-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #29
C00742 00032	∂12-Mar-84  1023	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #30
C00767 00033	∂13-Jan-85  1624	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #31
C00793 00034	∂16-Mar-84  1247	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #32
C00818 00035	∂18-Mar-84  2328	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #33
C00852 00036	∂22-Mar-84  1127	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #34
C00881 00037	∂26-Mar-84  1241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #35
C00905 00038	∂29-Mar-84  0017	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #36
C00929 00039	∂29-Mar-84  1401	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #37
C00953 00040	∂29-Mar-84  2317	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #38
C00970 00041	∂31-Mar-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #39
C00990 00042	∂03-Apr-84  2054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #40
C01018 00043	∂03-Apr-84  2141	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #41
C01040 00044	∂04-Apr-84  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #42
C01060 00045	∂05-Apr-84  2050	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #43
C01085 00046	∂07-Apr-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #44
C01106 00047	∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #45
C01129 00048	∂13-Apr-84  1129	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #46
C01152 00049	∂15-Apr-84  1824	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #47
C01171 00050	∂16-Apr-84  1106	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #48
C01191 00051	∂19-Apr-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #49
C01220 00052	∂21-Apr-84  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #50
C01247 00053	∂22-Apr-84  1629	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #51
C01267 00054	∂24-Apr-84  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #52
C01290 00055	∂28-Apr-84  1704	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #53
C01315 00056	∂03-May-84  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #54
C01333 00057	∂04-May-84  2111	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #55
C01362 00058	∂07-May-84  1032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #56
C01381 00059	∂08-May-84  2210	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #57
C01404 00060	∂14-May-84  1803	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #58
C01428 00061	∂20-May-84  2349	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #59
C01448 00062	∂21-May-84  0044	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #60
C01478 00063	∂21-May-84  1047	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #61
C01504 00064	∂22-May-84  2158	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #62
C01533 00065	∂25-May-84  0016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #63
C01560 00066	∂25-May-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #64
C01583 00067	∂27-May-84  2229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #65
C01607 00068	∂29-May-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #66
C01634 00069	∂31-May-84  2333	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #67
C01661 00070	∂01-Jun-84  1743	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #68
C01687 00071	∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #69
C01714 00072	∂05-Jun-84  2249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #70
C01742 00073	∂06-Jun-84  2238	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #71
C01762 00074	∂10-Jun-84  1607	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #72
C01787 00075	∂15-Jun-84  1345	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #73
C01816 00076	∂17-Jun-84  1531	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #74
C01838 00077	∂20-Jun-84  1154	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #75
C01864 00078	∂21-Jun-84  2327	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #76
C01885 00079	∂22-Jun-84  0657	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #77
C01911 00080	∂24-Jun-84  1136	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #78
C01938 00081	∂25-Jun-84  0021	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #79
C01972 00082	∂26-Jun-84  0054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #80
C01998 00083	∂28-Jun-84  1319	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #81
C02024 00084	∂28-Jun-84  1428	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #82
C02042 00085	∂05-Jul-84  2304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #83
C02062 00086	∂05-Jul-84  2203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #84
C02085 00087	∂06-Jul-84  1220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #85
C02114 00088	∂07-Jul-84  1252	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #86
C02138 00089	∂10-Jul-84  2221	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #87
C02166 00090	∂11-Jul-84  1558	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #88
C02188 00091	∂12-Jul-84  1604	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #89
C02216 00092	∂13-Jul-84  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #90
C02242 00093	∂16-Jul-84  0015	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #91
C02271 00094	∂17-Jul-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #92
C02299 00095	∂18-Jul-84  1916	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #93
C02324 00096	∂21-Jul-84  1638	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #94
C02349 00097	∂25-Jul-84  0101	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #95
C02378 00098	∂26-Jul-84  1439	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #96
C02410 00099	∂27-Jul-84  2351	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #97
C02431 00100	∂01-Aug-84  1020	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #98
C02445 00101	∂02-Aug-84  1213	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #99
C02465 00102	∂04-Aug-84  0512	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #100    
C02494 00103	∂04-Aug-84  2220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #101    
C02517 00104	∂08-Aug-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #102    
C02532 00105	∂10-Aug-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #103    
C02552 00106	∂12-Aug-84  1928	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #104    
C02572 00107	∂14-Aug-84  2357	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #105    
C02590 00108	∂19-Aug-84  1854	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #106    
C02610 00109	∂19-Aug-84  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #107    
C02629 00110	∂21-Aug-84  1735	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #108    
C02655 00111	∂25-Aug-84  1857	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #109    
C02675 00112	∂24-Aug-84  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #110    
C02700 00113	∂28-Aug-84  2259	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #111    
C02721 00114	∂31-Aug-84  1217	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #112    
C02742 00115	∂02-Sep-84  2241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #113    
C02765 00116	∂05-Sep-84  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #114    
C02792 00117	∂12-Sep-84  1416	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #115    
C02814 00118	∂12-Sep-84  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #116    
C02831 00119	∂12-Sep-84  1650	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #117    
C02856 00120	∂13-Sep-84  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #118    
C02879 00121	∂16-Sep-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #119    
C02903 00122	∂19-Sep-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #120    
C02925 00123	∂19-Sep-84  2307	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #121    
C02946 00124	∂21-Sep-84  0034	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #122    
C02966 00125	∂23-Sep-84  1304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #123    
C02991 00126	∂23-Sep-84  2339	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #124    
C03014 00127	∂26-Sep-84  0102	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #125    
C03035 00128	∂27-Sep-84  0258	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #126    
C03051 00129	∂28-Sep-84  0103	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #127    
C03069 00130	∂01-Oct-84  1132	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #128    
C03088 00131	∂02-Oct-84  1108	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #129    
C03112 00132	∂03-Oct-84  1218	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #130    
C03133 00133	∂06-Oct-84  1720	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #131    
C03166 00134	∂07-Oct-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #132    
C03186 00135	∂08-Oct-84  1204	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #133    
C03212 00136	∂09-Oct-84  0024	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #134    
C03243 00137	∂10-Oct-84  1509	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #135    
C03265 00138	∂11-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #136    
C03289 00139	∂13-Oct-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #137    
C03313 00140	∂14-Oct-84  2048	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #138    
C03340 00141	∂17-Oct-84  1249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #139    
C03370 00142	∂18-Oct-84  0000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #140    
C03396 00143	∂18-Oct-84  1240	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #141    
C03421 00144	∂19-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #142    
C03446 00145	∂20-Oct-84  2331	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #143    
C03469 00146	∂24-Oct-84  1337	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #144    
C03497 00147	∂27-Oct-84  2326	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #145    
C03523 00148	∂28-Oct-84  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #146    
C03549 00149	∂31-Oct-84  0030	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #147    
C03575 00150	∂01-Nov-84  1138	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #148    
C03600 00151	∂05-Nov-84  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #149    
C03618 00152	∂07-Nov-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #150    
C03648 00153	∂09-Nov-84  1308	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #151    
C03680 00154	∂11-Nov-84  0004	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #152    
C03705 00155	∂11-Nov-84  2334	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #153    
C03738 00156	∂15-Nov-84  0022	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #154    
C03762 00157	∂15-Nov-84  0125	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #155    
C03785 00158	∂15-Nov-84  2000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #156    
C03814 00159	∂18-Nov-84  1358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #157    
C03844 00160	∂21-Nov-84  1306	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #158    
C03867 00161	∂21-Nov-84  2341	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #159    
C03893 00162	∂24-Nov-84  1543	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #160    
C03915 00163	∂25-Nov-84  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #161    
C03947 00164	∂28-Nov-84  1620	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #162    
C03978 00165	∂29-Nov-84  1254	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #163    
C04008 00166	∂30-Nov-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #164    
C04037 00167	∂01-Dec-84  2350	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #165    
C04062 00168	∂06-Dec-84  1139	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #166    
C04091 00169	∂02-Dec-84  1843	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #167    
C04126 00170	∂02-Dec-84  2016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #168    
C04146 00171	∂02-Dec-84  2145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #169    
C04164 00172	∂04-Dec-84  0104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #170    
C04189 00173	∂06-Dec-84  1355	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #171    
C04212 00174	∂06-Dec-84  1853	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #172    
C04238 00175	∂08-Dec-84  0032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #173    
C04268 00176	∂08-Dec-84  2332	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #174    
C04297 00177	∂11-Dec-84  1203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #175    
C04322 00178	∂13-Dec-84  1448	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #176    
C04351 00179	∂13-Dec-84  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #177    
C04376 00180	∂16-Dec-84  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #178    
C04405 00181	∂19-Dec-84  1435	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #179    
C04430 00182	∂21-Dec-84  1303	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #180    
C04461 00183	∂21-Dec-84  1814	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #181    
C04488 00184	∂26-Dec-84  0122	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #182    
C04513 00185	∂31-Dec-84  1338	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #183    
C04534 00186	∂04-Jan-85  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #184    
C04555 ENDMK
C⊗;
This is Volume 2 of the AIList digests.

The digests are edited by Ken Laws of SRI.
To be added to the list, send mail to AIList-REQUEST@SRI-AI, or better yet
read the current digests in the file AI.TXT[2,2].
Mail your submissions to AIList@SRI-AI.
Vol. 1 is in file AI.V1[2,2].
∂03-Jan-84  1823	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #1 
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Jan 84  18:23:22 PST
Date: Tue  3 Jan 1984 15:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #1
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Jan 1984       Volume 2 : Issue 1

Today's Topics:
  Administrivia - Host List & VISION-LIST,
  Cognitive Psychology - Looping Problem,
  Programming Languages - Questions,
  Logic Programming - Disjunctions,
  Vision - Fiber Optic Camera
----------------------------------------------------------------------

Date: Tue 3 Jan 84 15:07:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Host List

The AIList readership has continued to grow throughout the year, and only
a few individuals have asked to be dropped from the distribution network.
I cannot estimate the number of readers receiving AIList through bboards
and remailing nodes, but the existence of such services has obviously
reduced the outgoing net traffic.  For those interested in such things,
I present the following approximate list of host machines on my direct
distribution list.  Numbers in parentheses indicate individual subscribers;
all other hosts (and those marked with "bb") have redistribution systems.
A few of the individual subscribers are undoubtedly redistributing
AIList to their sites, and a few redistribution nodes receive the list
from other such nodes (e.g., PARC-MAXC from RAND-UNIX).  AIList is
also available to USENET through the net.ai distribution system.

    AEROSPACE(8), AIDS-UNIX, BBNA(2), BBNG(1), BBN-UNIX(8), BBN-VAX(3),
    BERKELEY(3), BITNET@BERKELEY(2), ONYX@BERKELEY(1), UCBCAD@BERKELEY(2),
    BRANDEIS(1), BRL(bb+1), BRL-VOC(1), BROWN(1), BUFFALO-CS(1),
    cal-unix@SEISMO(1), CIT-20, CMU-CS-A(bb+11) CMU-CS-G(3),
    CMU-CS-SPICE(1), CMU-RI-ISL1(1), COLUMBIA-20, CORNELL,
    DEC-MARLBORO(7), EDXA@UCL-CS(1), GATECH, HI-MULTICS(bb+1),
    CSCKNP@HI-MULTICS(2), SRC@HI-MULTICS(1), houxa@UCLA-LOCUS(1),
    HP-HULK(1), IBM-SJ(1), JPL-VAX(1), KESTREL(1), LANL, LLL-MFE(2),
    MIT-MC, NADC(2), NOSC(4), NOSC-CC(1), CCVAX@NOSC(3), NPRDC(2),
    NRL-AIC, NRL-CSS, NSF-CS, NSWC-WO(2), NYU, TYM@OFFICE(bb+2),
    RADC-Multics(1), RADC-TOPS20, RAND-UNIX, RICE, ROCHESTER(2),
    RUTGERS(bb+2), S1-C(1), SAIL, SANDIA(bb+1), SCAROLINA(1),
    sdcrdcf@UCBVAX(1), SRI-AI(bb+6), SRI-CSL(1), SRI-KL(12), SRI-TSC(3),
    SRI-UNIX, SU-AI(2), SUMEX, SUMEX-AIM(2), SU-DSN, SU-SIERRA@SU-DSN(1),
    SUNY-SBCS(1), SU-SCORE(11), SU-PSYCH@SU-SCORE(1), TEKTRONIX(1), UBC,
    UCBKIM, UCF-CS, UCI, UCL-CS, UCLA-ATS(1), UCLA-LOCUS(bb+1),
    UDel-Relay(1), UIUC, UMASS-CS, UMASS-ECE(1), UMCP-CS, UMN-CS(bb+1),
    UNC, UPENN, USC-ECL(7), USC-CSE@USC-ECL(2), USC-ECLD@USC-ECL(1),
    SU-AI@USC-ECL(4), USC-ECLA(1), USC-ECLB(2), USC-ECLC(2), USC-ISI(5),
    USC-ISIB(bb+6), USC-ISID(1), USC-ISIE(2), USC-ISIF(10), UTAH-20(bb+2),
    utcsrgv@CCA-UNIX(1), UTEXAS-20, TI@UTEXAS-20(1), WISC-CRYS(3),
    WASHINGTON(4), YALE

                                        -- Ken Laws

------------------------------

Date: Fri, 30 Dec 83 15:20:41 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Are you interested in a more specialized "VISION-LIST"?

        I've been feeling frustrated (again).  I really like AIList,
since it provides a nice forum for general AI topics.  Yet, like
many of you out there, I am primarily a vision researcher looking into
ways to facilitate machine vision and trying to decipher the strange,
all-too-often unknown mechanisms of sight.  What we need is a
specialized VISION-LIST to provide a more specific forum that will
foster a greater exchange of ideas among vision researchers.
So... one question and one request:  1) is there such a list in the
works?  2) if you are interested in such a list, PLEASE SPEAK UP!!

                        Thanks!
                        Philip Kahn
                        UCLA

------------------------------

Date: Fri 30 Dec 83 11:04:17-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: Loop detection

Mike,
        It seems to me that we have an inbuilt mechanism which remembers
what is done (thought) at all times, i.e., we know and remember (more or
less) our train of thoughts.  When we get in a loop, the mind is
immediately triggered: at the first element we think it could be a
coincidence, but as more elements are found matching the loop, the more
convinced we become that there is a repeat.  The reading example is quite
good: even when just one word appears in the same sentence context
(meaning rather than syntactical context), my mind is triggered and I go
back and check whether there is actually a loop or not.  Thus to implement
this property in a computer we would need a mechanism able to remember the
path and check at each step whether it has been followed already (and how
far).  Detection of repeats of logical content, rather than word-for-word
repeats of sentences (or sets of ideas), is still an open problem.
        I think that the loop detection mechanism is part of the
memorization process, which is an integral part of the reasoning engine;
it is not sitting "on top" of the reasoning process and monitoring it from
above.

Rene
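
The mechanism described above (remember the path of thought, check each
new step against it) amounts to first-repeat detection over a sequence
of states.  A minimal sketch, in Python with made-up "thought" labels:

```python
def first_repeat(states):
    """Return (first_index, repeat_index) for the first state that
    recurs, or None if the sequence never loops."""
    seen = {}                      # state -> index where first seen
    for i, s in enumerate(states):
        if s in seen:
            return seen[s], i      # the loop is flagged immediately
        seen[s] = i
    return None

print(first_repeat(["read", "parse", "think", "parse"]))  # (1, 3)
```

Detecting repeats of logical content rather than literal states would
need a fuzzier match than the exact-equality test used here.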

------------------------------

Date: 2 January 1984 14:40 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: stupid questions....

Speaking as an interested outsider to AI, I have a few questions that
I hope someone can answer in non-jargon.  Any help is greatly appreciated:

1. Just why is a language like LISP better for doing AI stuff than a
language like PASCAL or ADA?  In what sense is LISP "more natural" for
simulating cognitive processes?  Why can't you do this in more tightly
structured languages like PASCAL?

2. What is the significance of not distinguishing between data and
program in LISP?  How does this help?

3. What is the difference between decisions made in a production
system (as I understand it, a production is a construct of the form IF
X is true, then do Y, where X is a condition and Y is a procedure),
and decisions made in a PASCAL program (in which IF statements also
have the same (superficial) form).


many thanks.

------------------------------

Date: 1 Jan 84 1:01:50-PST (Sun)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Re: a trivial reasoning problem? - (nf)
Article-I.D.: fortune.2135

Gee, and to a non-Prolog person (me) your problem seemed so simple
(even given the no-exhaustive-search rule). Let's see,

        1. At least one of A or B is on = (A v B)
        2. If A is on, B is not         = (A -> ~B) = (~A v ~B)  [def'n of ->]
        3. A and B are binary conditions.

>From #3, we are allowed to use first-order Boolean algebra (WFF'n'PROOF game).
(That is, #3 is a meta-condition.)

So, #1 and #2 together is just (#1) ↑ (#2) [using caret ↑ for conjunction]

or,             #1 ↑ #2 = (A v B) ↑ (~A v ~B)
(distributivity)        = (A ↑ ~A) v (A ↑ ~B) v (B ↑ ~A) v (B ↑ ~B)
(from #3 and ↑-axiom)   = (A ↑ ~B) v (B ↑ ~A)
(def'n of xor)          = A xor B
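
Since #3 leaves only four assignments, the derivation can be checked
exhaustively.  A quick sketch (Python used here purely as a truth-table
calculator):

```python
# Check: (A v B) ^ (~A v ~B)  is the same function as  A xor B.
for A in (False, True):
    for B in (False, True):
        lhs = (A or B) and ((not A) or (not B))
        rhs = A != B               # xor on booleans
        assert lhs == rhs, (A, B)
print("equivalent on all four assignments")
```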

Hmmm... Maybe I am missing your original question altogether. Is your real
question "How does one enumerate the elements of a state-space (powerset)
for which a certain logical proposition is true without enumerating (examining)
elements of the state-space for which the proposition is false?"?

To me (an ignorant "non-ai" person), this seems excluded by a version of the
First Law of Thermodynamics, namely, the Law of the Excluded Miraculous Sort
(i.e. to tell which of two elements is bigger, you have to look at both).

It seems to me that you must at least look at SOME of the states for which the
proposition is false, or equivalently, you must use the structure of the
formula itself to do the selection (say, while doing a tree-walk). The problem
with the former approach is that the number of "bad" states must be kept
small (for efficiency), leading to all kinds of pruning heuristics; while
with the latter method the problem of eliminating duplicates (assuming
parallel processing) leads back to the former method!

In either case, however, reasoning about the variables does not seem to
solve the problem; one must reason about the formulae. If Prolog admits
of constructing such meta-rules, you may have a chance.  (I.e., "For all
true formulae 'X xor Y', only X need be considered when ~Y, and vice
versa.")

In any event, I think your problem can be simplified to:

        1'. A xor B
        2'. A, B are binary variables.


Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 28 Dec 83 4:01:48-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: REFERENCES FOR SPECIALIZED CAMERA DE - (nf)
Article-I.D.: fortune.2114

Please clarify what you mean by "get close to the focal point of the
optical system". For any lens system I've used (both cameras and TVs),
the imaging surface (the film or the sensor) already IS at the focal point.
As I recall, the formula (for convex lenses) is:

         1     1     1
        --- = --- + ---
         f    obj   img

where "f" is the focal length of the lens, "obj" the distance to the "object",
and "img" the distance to the (real) image. Solving for minimum "obj + img",
the closest you can get a focused image to the object (using a lens) is 4*f,
with the lens midway between the object and the image (1/f = 1/2f + 1/2f).
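
That minimum can be checked numerically from the formula above.  A quick
sketch in Python (the 50-unit focal length is an arbitrary choice):

```python
# Thin-lens formula, solved for the image distance:
#   1/f = 1/obj + 1/img   =>   img = 1 / (1/f - 1/obj)
def image_distance(f, obj):
    """Lens-to-image distance; requires obj > f so a real image forms."""
    return 1.0 / (1.0 / f - 1.0 / obj)

f = 50.0
# The object-to-image separation obj + img bottoms out at 4*f,
# reached when obj == img == 2*f.
sep = {obj: obj + image_distance(f, obj) for obj in range(51, 1001)}
closest = min(sep, key=sep.get)
print(closest, sep[closest])       # minimum is at obj == 2*f == 100
```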

Not sure what a bundle of fibers would do for you, since without a lens each
fiber picks up all the light around it within a cone of its numerical
aperture (NA). Some imaging systems DO use fiber bundles directly in contact
with film, but that's generally going the other way (from a CRT to film).
I think Tektronix has a graphics output device like that. I suppose you
could use it if the object were self-luminous...

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

End of AIList Digest
********************

∂04-Jan-84  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #2 
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Jan 84  20:47:43 PST
Date: Wed  4 Jan 1984 16:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #2
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 2

Today's Topics:
  Hardware - High Resolution Video Projection,
  Programming Languages - LISP vs. Pascal,
  Net Course - AI and Mysticism
----------------------------------------------------------------------

Date: 04 Jan 84  1553 PST
From: Fred Lakin <FRD@SU-AI>
Subject: High resolution video projection

I want to buy a hi-resolution monochrome video projector suitable for use with
generic LISP machine or Star-type terminals (i.e., approx. 1000 x 1000 pixels).
It would be nice if it cost less than $15K and didn't require expensive
replacement parts (like light valves).

Does anybody know of such currently on the market?

I know, chances seem dim, so on to my second point: I have heard it would be
possible to make a portable video projector that would cost $5K, weigh 25lb,
and project using monochrome green phosphor.  The problem is that industry
does not feel the market demand would justify production at such a price ...
Any ideas on how to find out the demand for such an item?  Of course if
all of you who might be interested in this kind of projector let me know
your suggestions, that would be a good start.

Thanks in advance for replies and/or notions,
Fred Lakin

------------------------------

Date: Wed 4 Jan 84 10:25:56-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Re: stupid questions (i.e. Why Lisp?)

        You might want to read an article by Beau Sheil (Xerox PARC)
in the February '83 issue of Datamation called "Power tools for
programmers."  It is mostly about the Interlisp-D programming
environment, but might give you some insights about LISP in general.
        I'll offer three other reasons, though.
        Algol family languages lack the datatypes to conveniently
implement a large number of knowledge representation schemes.  Ditto
wrt. rules.  Try to imagine setting up a pascal record structure to
embody the rules "If I have less than half of a tank of gas then I
have as a goal stopping at a gas station" & "If I am carrying valuable
goods, then I should avoid highway bandits."  You could write pascal
CODE that sort of implemented the above, but DATA would be extremely
difficult.  You would almost have to write a lisp interpreter in
pascal to deal with it.  And then, when you've done that, try writing
a compiler that will take your pascal data structures and generate
native code for the machine in question!  Now, do it on the fly, as a
knowledge engineer is augmenting the knowledge base!
        Algol languages have a tedious development cycle because they
typically do not let a user load/link the same module many times as he
debugs it.  He typically has to relink the entire system after every
edit.  This prevents much in the way of incremental compilation, and
makes such languages tedious to debug in.  This is an argument against
the languages in general, and doesn't apply to AI explicitly.  The AI
community feels this as a pressure more, though, perhaps because it
tends to build such large systems.
        Furthermore, consider that most bugs in non-AI systems show up
at compile time.  If a flaw is in the KNOWLEDGE itself in an AI
system, however, the flaws will only show up in the form of incorrect
(unintelligent?) behavior.  Typically only lisp-like languages provide
the run-time tools to diagnose such problems.  In Pascal, etc, the
programmer would have to go back and explicitly put all sorts of
debugging hooks into the system, which is both time consuming, and is
not very clean.  --Christopher
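
For contrast, those two rules held as DATA rather than code might look
like this (a toy sketch; Python stands in for a lisp here, with lambdas
as the IF parts, and both rules paraphrased from the message above):

```python
rules = [
    (lambda s: s["fuel"] < 0.5,          "stop-at-gas-station"),
    (lambda s: s["carrying-valuables"],  "avoid-highway-bandits"),
]

def goals(state):
    """Collect the goal of every rule whose IF part holds in state."""
    return [goal for test, goal in rules if test(state)]

print(goals({"fuel": 0.3, "carrying-valuables": True}))
# ['stop-at-gas-station', 'avoid-highway-bandits']
```

Because the rules are an ordinary list, a program can inspect, add, or
rewrite them at run time, which is the point being made above.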

------------------------------

Date: 4 Jan 84 13:59:07 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Re: Herb Lin's questons on LISP etc.

Herb:
Those are hardly stupid questions.  Let me try to answer:

        1. Just why is a language like LISP better for doing AI stuff than a
        language like PASCAL or ADA?

There are two kinds of reasons.  You could argue that LISP is more
oriented towards "symbolic" processing than PASCAL.  However, probably
more important is the fact that LISP provides a truly outstanding
environment for exploratory programming, that is, programming where
you do not completely understand the problem or its solutions before
you start programming.  This is normally the case in AI programming -
even if you think you understand things you normally find out there
was at least something you were wrong about or had forgotten.  That's
one major reason for actually writing the programs.

Note that I refer to the LISP environment, not just the language.  The
existence of good editors, debuggers, cross reference aids, etc. is at
least as important as the language itself.  A number of features of LISP
make a good environment easy to provide for LISP.  These include the
compatible interpreter/compiler, the centrality of function calls, and the
simplicity and accessibility of the internal representation of programs.

For a very good introduction to the flavor of programming in LISP
environments, see "Programming in an Interactive Environment, the LISP
Experience", by Erik Sandewall, Computing Surveys, V. 10 #1, March 1978.

        2. What is the significance of not distinguishing between data
        and program in LISP?  How does this help?

Actually, in ANY language, the program is also data for the interpreter
or compiler.  What is important about LISP is that the internal form used
by the interpreter is simple and accessible.  It is simple in that the
internal form is a structure of nested lists that captures most of
both the syntactic and the semantic structure of the code.  It is accessible
in that this structure of nested lists is in fact a basic built in data
structure supported by all the facilities of the system, and in that a
program can access or set the definition of a function.

Together these make it easy to write programs which operate on other programs.
E.g.  to add a trace feature to PASCAL you have to modify the compiler or
interpreter.  To add a trace feature to LISP you need not modify the
interpreter at all.

Furthermore, it turns out to be easy to use LISP to write interpreters
for other languages, as long as the other languages use a similar
internal form and have a similarly simple relation between form and
semantics.  Thus, a common way to solve a problem in LISP is to
implement a language in which it is easy to express solutions to
problems in a general class, and then use this language to solve your
particular problem.  See the Sandewall article mentioned above.
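
A rough modern analogue of the trace example (Python standing in for
LISP; the `trace` helper below is hypothetical, not a real TRACE
package): function definitions are ordinary run-time values, so a
program can fetch one, wrap it, and store the wrapper back without
touching the interpreter or compiler.

```python
def trace(fn):
    """Wrap fn so every call and its result are printed."""
    def traced(*args):
        print(f"call {fn.__name__}{args}")
        result = fn(*args)
        print(f"  -> {result}")
        return result
    return traced

def square(x):
    return x * x

square = trace(square)   # redefine at run time; no compiler changes
square(7)                # prints the call and "-> 49", returns 49
```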

        3. What is the difference between decisions made in a production
        system and decisions made in a PASCAL program (in which IF statements
        also have the same (superficial) form).

Production Systems gain some advantages by restricting the languages
for the IF and THEN parts.  Also, in many production systems, all
the IF parts are evaluated first, to see which are true, before any
THEN part is done.  If more than one IF part is true, some other
mechanism decides which THEN part (or parts) to do.  Finally, some
production systems such as EMYCIN do "backward chaining", that is, one
starts with a goal and asks which THEN parts, if they were done, would
be useful in achieving the goal.  One then looks to see if their
corresponding IF parts are true, or can be made true by treating them
as sub-goals and doing the same kind of reasoning on them.
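
That backward-chaining loop can be sketched in a few lines (a toy
illustration with made-up animal rules, not EMYCIN's actual rule
language):

```python
# Each rule is data: (set of IF parts, THEN part).
RULES = [
    ({"has-fur", "gives-milk"}, "mammal"),
    ({"mammal", "eats-meat"},   "carnivore"),
]

def prove(goal, facts):
    """True if goal is a known fact, or is the THEN part of a rule
    whose IF parts can all be proved as sub-goals."""
    if goal in facts:
        return True
    return any(then == goal and all(prove(g, facts) for g in ifs)
               for ifs, then in RULES)

print(prove("carnivore", {"has-fur", "gives-milk", "eats-meat"}))  # True
```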

A very good introduction to production systems is "An Overview of Production
Systems" by Randy Davis and Jonathan King, October 1975, Stanford AI Lab
Memo AIM-271 and Stanford CS Dept. Report STAN-CS-75-524.  It's probably
available from the National Technical Information Service.

------------------------------

Date: 1 Jan 84 8:42:34-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide Course -- AI and Mysticism!!
Article-I.D.: psuvax.395

*************************************************************************
*                                                                       *
*            An Experiment in Teaching, an Experiment in AI             *
*       Spring Term Artificial Intelligence Seminar Announcement        *
*                                                                       *
*************************************************************************

This Spring term Penn State inaugurates a new experimental course:

        "THE HUMAN CONDITION: PROBLEMS AND CREATIVE SOLUTIONS".

This course explores all that makes the human condition so joyous and
delightful: learning, creative expression, art, music, inspiration,
consciousness, awareness, insight, sensation, planning, action, community.
Where others study these DESCRIPTIVELY, we will do so CONSTRUCTIVELY.  We
will gain familiarity by direct human experience and by building artificial
entities which manifest these wonders!!

We will formulate and study models of the human condition -- an organism of
bounded rationality confronting a bewilderingly complex environment.  The
human organism must fend for survival, but it is aided by some marvelous
mechanisms: perception (vision, hearing), cognition (understanding, learning,
language), and expression (motor skill, music, art).  We can view these
respectively as the input, processing, and output of symbolic information.
These mechanisms somehow encode all that is uniquely human in our experience
-- or do they??  Are these mechanisms universal among ALL sentient beings, be
they built from doped silicon or neural jelly?  Are these mechanisms really
NECESSARY and SUFFICIENT for sentience?

Not content with armchair philosophizing, we will push these models toward
the concreteness needed for physical implementation.  We will build the tools
that will help us to understand and use the necessary representations and
processes, and we will use these tools to explore the space of possible
realizations of "artificial sentience".

This will be no ordinary course.  For one thing, it has no teacher.  The
course will consist of a group of highly energetic individuals engaged in
seeking the secrets of life, motivated solely by the joy of the search
itself.  I will function as a "resource person" to the extent my background
allows, but the real responsibility for the success of the expedition rests
upon ALL of its members.

My role is that of "encounter group facilitator":  I jab when things lag.
I provide a sheltered environment where the shy can "come out" without
fear.  I manipulate and connive to keep the discussions going at a fever
pitch.  I pick and poke, question and debunk, defend and propose, all to
incite people to THINK and to EXPRESS.

Several people who can't be at Penn State this Spring told me they wish
they could participate -- so: I propose opening this course to the entire
world, via the miracles of modern networks!  We have arranged a local
mailing list for sharing discussions, source-code, class-session summaries,
and general flammage (amid the chaff there will surely be SOME wheat).  I'm aware
of three fora for sharing this: USENET's net.ai, Ken Laws' AIList, and
MIT's SELF-ORG mailing list.  PLEASE MAIL ME YOUR REACTIONS to using these
resources: would YOU like to participate? would it be a productive use of
the phone lines? would it be more appropriate to go to /dev/null?

The goals of this course are deliberately ambitious.  I seek participants
who are DRIVEN to partake in this journey -- the best, brightest, most
imaginative and highly motivated people the world has to offer.

Course starts Monday, January 16.  If response is positive, I'll post the
network arrangements about that time.

This course is dedicated to the proposition that the best way to secure
for ourselves the blessings of life, liberty, and the pursuit of happiness
is reverence for all that makes the human condition beautiful, and the
best way to build that reverence is the scientific study and construction
of the marvels that make us truly human.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 1 Jan 84 8:46:31-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide AI Course -- Part 2
Article-I.D.: psuvax.396

*************************************************************************
*                                                                       *
*         Spring Term Artificial Intelligence Seminar Syllabus          *
*                                                                       *
*************************************************************************


  MODELS OF SENTIENCE
    Learning, Cognitive Model Formation, Insight, Discovery, Expression;
    "Subcognition as Computation", "Cognition as Subcomputation";
    Physical, Cultural, and Intellectual Evolution.

      SYMBOLIC INPUT CHANNELS: PERCEPTION
        Vision, hearing, signal processing, the "signal/symbol interface".

      SYMBOLIC PROCESSING: COGNITION
        Language, Understanding, Goals, Knowledge, Reasoning.

      SYMBOLIC OUTPUT CHANNELS: EXPRESSION
        Motor skills, Artistic and Musical Creativity, Story Creation,
        Prose, Poetry, Persuasion, Beauty.

  CONSEQUENCES OF THESE MODELS
    Physical Symbol Systems and Godel's Incompleteness Theorems;
    The "Aha!!!" Phenomenon, Divine Inspiration, Extra-Sensory Perception,
    The Conscious/Unconscious Mind, The "Right-Brain/Left-Brain" Dichotomy;
    "Who Am I?", "On Having No Head"; The Nature and Texture of Reality;
    The Nature and Role of Humor; The Direct Experience of the Mystical.

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN HUMANS
    Meditation, Musical and Artistic Experience, Problem Solving,
    Games, Yoga, Zen, Haiku, Koans, "Calculus for Peak Experiences".

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN MACHINES

    REVIEW OF LISP PROGRAMMING AND FORMAL SYMBOL MANIPULATION:
      Construction and access of symbolic expressions, Evaluation and
      Quotation, Predicates, Function definition; Functional arguments
      and returned values; Binding strategies -- Local versus Global,
      Dynamic versus Lexical, Shallow versus Deep; Compilation of LISP.

    IMPLEMENTATION OF LISP:  Storage Mapping and the Free List;
      The representation of Data: Typed Pointers, Dynamic Allocation;
      Symbols and the Symbol Table (Obarray); Garbage Collection
      (Sequential and Concurrent algorithms).

    REPRESENTATION OF PROCEDURE:  Meta-circular definition of the
      evaluation process.

    "VALUES" AND THE OBJECT-ORIENTED VIEW OF PROGRAMMING: Data-Driven
      Programming, Message-Passing, Information Hiding; the MIT Lisp Machine
      "Flavor" system; Functional and Object-Oriented systems -- comparison
      with SMALLTALK.

    SPECIALIZED AI PROGRAMMING TECHNIQUES:  Frames and other Knowledge
      Representation Languages, Discrimination Nets, Augmented Transition
      Networks; Pattern-Directed Inference Systems, Agendas, Chronological
      Backtracking, Dependency-Directed Backtracking, Data Dependencies,
      Non-Monotonic Logic, and Truth-Maintenance Systems.

    LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
      Frames and other Knowledge Representation Languages, Discrimination
      Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  SCIENTIFIC AND ETHICAL CONSEQUENCES OF THESE ABILITIES IN HUMANS
  AND IN MACHINES
    The Search for Extra-Terrestrial Intelligence.
      (Would we recognize it if we found it?  Would they recognize us?)
    The Search for Terrestrial Intelligence.
    Are We Unique?  Are we worth saving?  Can we save ourselves?
    Why are we here?  Why is ANYTHING here?  WHAT is here?
    Where ARE we?  ARE we?  Is ANYTHING?


These topics form a cluster of related ideas which we will pursue more-or-
less concurrently; the listing is not meant to imply a particular sequence.

Various course members have expressed interest in the following software
engineering projects.  These (and possibly others yet to be suggested)
will run concurrently throughout the course:

    LISP Implementations:
      For CMS, in PL/I and/or FORTRAN
      In PASCAL, optimized for personal computers (esp HP 9816)
      In Assembly, optimized for Z80 and MC68000
      In 370 BAL, modifications of LISP 1.5

    New "High-Level" Systems Languages:
      Flavor System (based on the MIT Zetalisp system)
      Prolog Interpreter (plus compiler?)
      Full Programming Environment (Enhancements to LISP):
        Compiler, Editor, Workspace Manager, File System, Debug Tools

    Architectures and Languages for Parallel {Sub-}Cognition:
      Software and Hardware Alternatives to the von Neumann Computer
      Concurrent Processing and Message Passing systems

    Machine Learning and Discovery Systems:
      Representation Language for Machine Learning
      Strategy Learning for various Games (GO, CHECKERS, CHESS, BACKGAMMON)

    Perception and Motor Control Systems:
      Vision (implementations of David Marr's theories)
      Robotic Welder control system

    Creativity Systems:
      Poetry Generators (Haiku)
      Short-Story Generators

    Expert Systems (traditional topic, but including novel features):
      Euclidean Plane Geometry Teaching and Theorem-Proving system
      Welding Advisor
      Meteorological Analysis Teaching system


READINGS -- the following books will be very helpful:

    1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1984.

    2.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
    Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Vols 1, 2, 3.

    3.  MACHINE LEARNING, Michalski, Carbonell, and Mitchell; Tioga, 1983.

    4.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
    Basic Books, 1979.

    5.  THE MIND'S I, Douglas R. Hofstadter and Daniel C. Dennett;
    Basic Books, 1981.

    6.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.

    7.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.

    8.  ARTIFICIAL INTELLIGENCE PROGRAMMING, Eugene Charniak, Christopher K.
    Riesbeck, and Drew V. McDermott; Lawrence Erlbaum Associates, 1980.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂05-Jan-84  1502	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #3 
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84  14:59:11 PST
Date: Wed  4 Jan 1984 17:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #3
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 3

Today's Topics:
  Course - Penn State's First Undergrad AI Course
----------------------------------------------------------------------

Date: 31 Dec 83 15:18:20-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Penn State's First Undergrad AI Course
Article-I.D.: psuvax.380

Last fall I taught Penn State's first ever undergrad AI course.  It
attracted 150 students, including about 20 faculty auditors.  I've gotten
requests from several people initiating AI courses elsewhere, and I'm
posting this and the next 6 items in hopes they may help others.

  1.  General Information
  2.  Syllabus (slightly more detailed topic outline)
  3.  First exam
  4.  Second exam
  5.  Third exam
  6.  Overview of how it went.

I'll be giving this course again, and I hate to do anything exactly the
same twice.  I welcome comments and suggestions from all net buddies!

        -- Bob

  [Due to the length of Bob's submission, I will send the three
  exams as a separate digest.  Bob's proposal for a network AI course
  associated with his spring semester curriculum was published in
  the previous AIList issue; that was entirely separate from the
  following material.  -- Ken Laws]

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

Date: 31 Dec 83 15:19:52-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course, Part 1/6
Article-I.D.: psuvax.381

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE

An introduction to the theory, research paradigms, implementation techniques,
and philosophies of Artificial Intelligence considered both as a science of
natural intelligence and as the engineering of mechanical intelligence.


OBJECTIVES  --  To provide:

   1.  An understanding of the principles of Artificial Intelligence;
   2.  An appreciation for the power and complexity of Natural Intelligence;
   3.  A viewpoint on programming different from and complementary to the
       viewpoints engendered by other languages in common use;
   4.  The motivation and tools for developing good programming style;
   5.  An appreciation for the power of abstraction at all levels of program
       design, especially via embedded compilers and interpreters;
   6.  A sense of the excitement at the forefront of AI research; and
   7.  An appreciation for the tremendous impact the field has had and will
       continue to have on our perception of our place in the Universe.


TOPIC SUMMARY:

  INTRODUCTION:  What is "Intelligence"?
    Computer modeling of "intelligent" human performance.  The Turing Test.
    Brief history of AI.  Relation of AI to psychology, computer science,
    management, engineering, mathematics.

  PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":
    "What is a Brain that it may possess Intelligence, and Intelligence that
    it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
    Systems, and Multilevel Interpreters.  Necessity and Sufficiency of
    Physical Symbol Systems as the basis for intelligence.

  REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:
    State Space, Predicate Calculus, Production Systems, Procedural
    Representations, Semantic Networks, Frames and Scripts.

  THE "PROBLEM-SOLVING" PARADIGM AND TECHNIQUES:
    Generate and Test, Heuristic Search (Search WITH Heuristics,
    Search FOR Heuristics), Game Trees, Minimax, Problem Decomposition,
    Means-Ends Analysis, The General Problem Solver (GPS).

  LISP PROGRAMMING:
    Symbolic Expressions and Symbol Manipulation, Data Structures,
    Evaluation and Quotation, Predicates, Input/Output, Recursion.
    Declarative and Procedural knowledge representation in LISP.

  LISP DETAILS:
    Storage Mapping, the Free List, and Garbage Collection,
    Binding strategies and the concept of the "Environment", Data-Driven
    Programming, Message-Passing, The MIT Lisp Machine "Flavor" system.

  LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
    Frames and other Knowledge Representation Languages, Discrimination
    Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  LOGIC, RULE-BASED SYSTEMS, AND INFERENCE:
    Logic: Axioms, Rules of Inference, Theorems, Truth, Provability.
    Production Systems: Rule Interpreters, Forward/Backward Chaining.
    Expert Systems: Applied Knowledge Representation and Inference.
    Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems,
    Theorem Proving, Question Answering, and Planning systems.

  THE UNDERSTANDING OF NATURAL LANGUAGE:
    Formal Linguistics: Grammars and Machines, the Chomsky Hierarchy.
    Syntactic Representation: Augmented Transition Networks (ATNs).
    Semantic Representation: Conceptual Dependency, Story Understanding.
    Spoken Language Understanding.

  ROBOTICS: Machine Vision, Manipulator and Locomotion Control.

  MACHINE LEARNING:
    The Spectrum of Learning: Learning by Adaptation, Learning by Being
      Told, Learning from Examples, Learning by Analogy, Learning by
      Experimentation, Learning by Observation and Discovery.
    Model Induction via Generate-and-Test, Automatic Theory Formation.
    A Model for Intellectual Evolution.

  RECAPITULATION AND CODA:
    The knowledge representation and problem-solving paradigms of AI.
    The key ideas and viewpoints in the modeling and creation of intelligence.
    Is there more (or less) to Intelligence, Consciousness, the Soul?
    Prospectus for the future.


Handouts for the course include:

1.  Computer Science as Empirical Inquiry: Symbols and Search.  1975 Turing
Award Lecture by Allen Newell and Herb Simon; Communications of the ACM,
Vol. 19, No. 3, March 1976.

2.  Steps Toward Artificial Intelligence.  Marvin Minsky; Proceedings of the
IRE, Jan. 1961.

3.  Computing Machinery and Intelligence.  Alan Turing; Mind, 1950 (Turing's
original proposal for the "Turing Test").

4.  Exploring the Labyrinth of the Mind.  James Gleick; New York Times
Magazine, August 21, 1983 (article about Doug Hofstadter's recent work).


TEXTBOOKS:

1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1983.
Will be available from publisher in early 1984.  I will distribute a
copy printed from Patrick's computer-typeset manuscript.

2.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.
Excellent introductory programming text, illustrating many AI implementation
techniques at a level accessible to novice programmers.

3.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
Basic Books, 1979.  One of the most entertaining books on the subject of AI,
formal systems, and symbolic modeling of intelligence.

4.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Comes as a three
volume set.  Excellent (the best available), but the full set costs over $100.

5.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.  Excellent text on the
definition and implementation of LISP, sufficient to enable one to write a
complete LISP interpreter.

------------------------------

Date: 31 Dec 83 15:21:46-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 2/6  (Topic Outline)
Article-I.D.: psuvax.382

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE


TOPIC OUTLINE:

   INTRODUCTION:  What is "Intelligence"?

   Computer modeling of "intelligent" human performance.  Turing Test.
   Brief history of AI.  Examples of "intelligent" programs:  Evans's Geometric
   Analogies, the Logic Theorist, General Problem Solver, Winograd's English
   language conversing blocks world program (SHRDLU), MACSYMA, MYCIN, DENDRAL.

   PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":

   "What is a Brain that it may possess Intelligence, and Intelligence that
   it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
   Systems, and Multilevel Interpreters.

   REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:

   State Space problem formulations.  Predicate Calculus.  Semantic Networks.
   Production Systems.  Frames and Scripts.

   SEARCH:

   Representation of problem-solving as graph search.
   "Blind" graph search:
      Depth-first, Breadth-first.
   Heuristic graph search:
      Best-first, Branch and Bound, Hill-Climbing.
   Representation of game-playing as tree search:
      Static Evaluation, Minimax, Alpha-Beta.
   Heuristic Search as a General Paradigm:
      Search WITH Heuristics, Search FOR Heuristics

   THE GENERAL PROBLEM SOLVER (GPS) AS A MODEL OF INTELLIGENCE:

   Goals and Subgoals -- problem decomposition
   Difference-Operator Tables -- the solution to subproblems
   Does the model fit?  Does GPS work?

   EXPERT SYSTEMS AND KNOWLEDGE ENGINEERING:

   Representation of Knowledge:  The "Production System" Movement
   The components:
      Knowledge Base
      Inference Engine
   Examples of famous systems:
      MYCIN, TEIRESIAS, DENDRAL, MACSYMA, PROSPECTOR

   INTRODUCTION TO LISP PROGRAMMING:

   Symbolic expressions and symbol manipulation:
      Basic data types
         Symbols
            The special symbols T and NIL
         Numbers
         Functions
      Assignment of Values to Symbols (SETQ)
      Objects constructed from basic types
         Constructor functions:  CONS, LIST, and APPEND
         Accessor functions:  CAR, CDR
   Evaluation and Quotation
   Predicates
   Definition of Functions (DEFUN)
   Flow of Control (COND, PROG, DO)
   Input and Output (READ, PRINT, TYI, TYO, and friends)

   REPRESENTATION OF DECLARATIVE KNOWLEDGE IN LISP:

   Built-in representation mechanisms
      Property lists
      Arrays
   User-definable data structures
      Data-structure generating macros (DEFSTRUCT)
   Manipulation of List Structure
      "Pure" operations (CONS, LIST, APPEND, REVERSE)
      "Impure" operations (RPLACA and RPLACD, NCONC, NREVERSE)
   Storage Mapping, the Free List, and Garbage Collection

   REPRESENTATION OF PROCEDURAL KNOWLEDGE IN LISP:

   Types of Functions
      Expr:  Call by Value
      Fexpr:  Call by Name
      Macros and macro-expansion
   Functions as Values
      APPLY, FUNCALL, LAMBDA expressions
      Mapping operators (MAPCAR and friends)
      Functional Arguments (FUNARGS)
      Functional Returned Values (FUNVALS)

   THE MEANING OF "VALUE":

   Assignment of values to symbols
   Binding of values to symbols
      "Local" vs "Global" variables
      "Dynamic" vs "Lexical" binding
      "Shallow" vs "Deep" binding
   The concept of the "Environment"

   "VALUES" AND THE OBJECT-CENTERED VIEW OF PROGRAMMING:

   Data-Driven programming
   Message-passing
   Information Hiding
   Safety through Modularity
   The MIT Lisp Machine "Flavor" system

   LISP'S TALENTS IN REPRESENTATION AND SEARCH:

   Representation of symbolic structures in LISP
      Predicate Calculus
      Rule-Based Expert Systems (the Knowledge Base examined)
      Frames
   Search Strategies in LISP
      Breadth-first, Depth-first, Best-first search
      Tree search and the simplicity of recursion
   Interpretation of symbolic structures in LISP
      Rule-Based Expert Systems (the Inference Engine examined)
      Symbolic Mathematical Manipulation
         Differentiation and Integration
      Symbolic Pattern Matching
         The DOCTOR program (ELIZA)

   LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS

   Frames and other Knowledge Representation Languages
   Discrimination Nets
   Augmented Transition Networks (ATNs) as a specification of English syntax
   Interpretation of ATNs
   Compilation of ATNs
   Alternative Control Structures
      Pattern-Directed Inference Systems (production system interpreters)
      Agendas (best-first search)
      Chronological Backtracking (depth-first search)
      Dependency-Directed Backtracking
   Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems
   "Higher" High-Level Languages:  PLANNER, CONNIVER

   PROBLEM SOLVING AND PLANNING:

   Hierarchical models of planning
      GPS, STRIPS, ABSTRIPS

   Non-Hierarchical models of planning
      NOAH, MOLGEN

   THE UNDERSTANDING OF NATURAL LANGUAGE:

   The History of "Machine Translation" -- a seemingly simple task
   The Failure of "Machine Translation" -- the need for deeper understanding
   The Syntactic Approach
      Grammars and Machines -- the Chomsky Hierarchy
      RTNs, ATNs, and the work of Terry Winograd
   The Semantic Approach
      Conceptual Dependency and the work of Roger Schank
   Spoken Language Understanding
      HEARSAY
      HARPY

   ROBOTICS:

   Machine Vision
      Early visual processing (a signal processing approach)
      Scene Analysis and Image Understanding (a symbolic processing approach)
   Manipulator and Locomotion Control
      Statics, Dynamics, and Control issues
      Symbolic planning of movements

   MACHINE LEARNING:

   Rote Learning and Learning by Adaptation
      Samuel's Checker player
   Learning from Examples
      Winston's ARCH system
      Mitchell's Version Space approach
   Learning by Planning and Experimentation
      Samuel's program revisited
      Sussman's HACKER
      Mitchell's LEX
   Learning by Heuristically Guided Discovery
      Lenat's AM (Automated Mathematician)
      Extending the Heuristics:  EURISKO
   Model Induction via Generate-and-Test
      The META-DENDRAL project
   Automatic Formation of Scientific Theories
      Langley's BACON project
   A Model for Intellectual Evolution (my own work)

   RECAP ON THE PRELUDE AND FUGUE:

   Formal Systems, Physical Symbol Systems, and Multilevel Interpreters
   revisited -- are they NECESSARY?  are they SUFFICIENT?  Is there more
   (or less) to Intelligence, Consciousness, the Soul?

   SUMMARY, CONCLUSIONS, AND FORECASTS:

   The representation of knowledge in Artificial Intelligence
   The problem-solving paradigms of Artificial Intelligence
   The key ideas and viewpoints in the modeling and creation of intelligence
   The results to date of the noble effort
   Prospectus for the future


------------------------------

Date: 31 Dec 83 15:28:32-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 6/6  (Overview)
Article-I.D.: psuvax.386

A couple of notes about how the course went.  Interest was high, but the
main problem I found is that Penn State students are VERY strongly
conditioned to work for grades and little else.  Most teachers bore them,
expect them to memorize lectures and regurgitate on exams, and students
then get drunk (over 50 frats here) and promptly forget all.  Initially
I tried to teach, but I soon realized that PEOPLE CAN LEARN (if they
really want to) BUT NOBODY CAN TEACH (students who don't want to learn).
As the course evolved my role became less "information courier" and more
"imagination provoker".  I designed exams NOT to measure learning but to
provoke thinking (and thereby learning).  The first exam (on semantic
nets) was given just BEFORE covering that topic in lecture -- students
had a hell of a hard time on the exam, but they sure sat up and paid
attention to the next week's lectures!

For the second exam I announced that TWO exams were being given: an easy
one (if they sat on one side of the room) and a hard one (on other side).
Actually the exams were identical.  (This explains the first question.)
The winning question submitted from the audience related to the chapter
in GODEL, ESCHER, BACH on the MU system: I gave a few axioms and inference
rules and then asked whether a given wff was a theorem.
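
The MU system from GODEL, ESCHER, BACH can itself be put on a machine.  A
bounded breadth-first search (a sketch, not part of the exam) answers this
kind of is-it-a-theorem question for short strings:

```python
# The MIU formal system from Hofstadter's GEB: axiom "MI" and four
# rewrite rules.  A bounded breadth-first search finds which short
# strings are derivable within the bound.

def successors(s):
    out = set()
    if s.endswith("I"):                  # Rule 1: xI -> xIU
        out.add(s + "U")
    out.add(s[0] + s[1:] * 2)            # Rule 2: Mx -> Mxx
    for i in range(len(s) - 2):          # Rule 3: III -> U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):          # Rule 4: UU -> (nothing)
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def theorems(max_len=8, steps=6):
    seen = {"MI"}
    frontier = {"MI"}
    for _ in range(steps):
        frontier = {t for s in frontier for t in successors(s)
                    if len(t) <= max_len} - seen
        seen |= frontier
    return seen

print("MIU" in theorems())   # True
print("MU" in theorems())    # False within any bound: the number of
                             # I's is never divisible by three
```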

The third exam was intended ENTIRELY to provoke discussion and NOT AT ALL
to measure anything.  It started with deadly seriousness, then (about 20
minutes into the exam) a few "audience plants" started acting out a
prearranged script which included discussing some of the questions and
writing some answers on the blackboard.  The attempt was to puncture the
"exam mentality" and generate some hot-blooded debate (you'll see what I
mean when you see the questions).  Even the Teaching Assistants were kept
in the dark about this "script"!  Overall, the attempt failed, but many
people did at least tell me that taking the exams was the most fun part
of the course!

With this lead-in, you probably have a clearer picture of some of the
motivations behind the spring term course.  To put it bluntly: I CANNOT
TEACH AI.  I CAN ONLY HOPE TO INSPIRE INTERESTED STUDENTS TO WANT TO LEARN
AI.  I'LL DO ANYTHING I CAN THINK OF WHICH INCREASES THAT INSPIRATION.

The motivational factors also explain my somewhat unusual grading system.
I graded on creativity, imagination, inspiration, desire, energy, enthusiasm,
and gusto.  These were partly measured by the exams, partly by the energy
expended on several optional projects (and term paper topics), and partly
by my seat-of-the-pants estimate of how determined a student was to DO real
AI.  This school prefers strict objective measures of student performance.
Tough.

This may all be of absolutely no relevance to others teaching AI.  Maybe
I'm just weird.  I try to cultivate that image, for it seems to attract
the best and brightest students!

					-- Bob Giansiracusa

------------------------------

End of AIList Digest
********************

∂05-Jan-84  1939	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #4 
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84  19:37:47 PST
Date: Thu  5 Jan 1984 11:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #4
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 4

Today's Topics:
  Course - PSU's First AI Course (continued)
----------------------------------------------------------------------

Date: 31 Dec 83 15:23:38-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 3/6  (First Exam)
Article-I.D.: psuvax.383

[The intent and application of the following three exams was described
in the previous digest issue.  The exams were intended to look difficult
but to be fun to take.  -- KIL]


********        ARTIFICIAL INTELLIGENCE  --  First Exam        ********

The field of Artificial Intelligence studies the modeling of human
intelligence in the hope of constructing artificial devices that display
similar behavior.  This exam is designed to study your ability to model
artificial intelligence in the hope of improving natural devices that
display similar behavior.  Please read ALL the questions first, introspect
on how an AI system might solve these problems, then simulate that system.
(Please do all work on separate sheets of paper.)


EASY PROBLEM:

The rules for differentiating polynomials can be expressed as follows:

IF the input is:  (A * X ↑ 3) + (B * X ↑ 2) + (C * X ↑ 1) + (D * X ↑ 0)

THEN the output is:
 (3 * A * X ↑ 2) + (2 * B * X ↑ 1) + (1 * C * X ↑ 0) + (0 * D * X ↑ -1)

(where "*" indicates multiplication and "↑" indicates exponentiation).

Note that all letters here indicate SYMBOLIC VARIABLES (as in algebra),
not NUMERICAL VALUES (as in FORTRAN).


1.  Can you induce from this sample the general rule for polynomial
differentiation?  Express that rule in English or Mathematical notation.
(The mathematicians in the group may have some difficulty here.)

2.  Can you translate your "informal" specification of the differentiation
rule into a precise statement of an inference rule in a Physical Symbol
System?  That is, define a set of objects and relations, a notation for
expressing them (hint: it doesn't hurt for the notation to look somewhat
like a familiar programming language which was invented to do mathematical
notation), and a symbolic transformation rule that encodes the rule of
inference representing differentiation.

3.  Can you now IMPLEMENT your Physical Symbol System using some familiar
programming language?  That is, write a program which takes as input a
data structure encoding your symbolic representation of a polynomial and
returns a data structure encoding the representation of its derivative.
(Hint as a check on infinite loops:  this program can be done in six
or fewer lines of code.  Don't be afraid to define a utility function
or two if it helps.)
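One possible sketch of question 3, in Python rather than the Lisp the
course assumed; the list-of-(coefficient, exponent)-pairs encoding and
the function name are invented for illustration, not part of the exam:

```python
def differentiate(poly):
    """Differentiate a polynomial encoded as [(coefficient, exponent), ...].

    Symbolic coefficients stay symbolic: the derivative of the term
    ('A', 3) is (('*', 3, 'A'), 2), i.e. 3*A times X^2.  Unlike the
    sample rule above, the vanishing constant term is simply dropped.
    """
    result = []
    for coeff, exponent in poly:
        if exponent != 0:                  # derivative of D * X^0 is 0
            result.append((('*', exponent, coeff), exponent - 1))
    return result

# (A*X^3 + B*X^2 + C*X + D)'  =  3A*X^2 + 2B*X + C
print(differentiate([('A', 3), ('B', 2), ('C', 1), ('D', 0)]))
```

The body is well within the exam's six-line budget; the rest is
documentation.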


SLIGHTLY HARDER PROBLEM:

Consider a world consisting of one block (a small wooden cubical block)
standing on the floor in the middle of a room.  A fly is perched on the
South wall, looking North at the block.  We want to represent the world
as seen by the fly.  In the fly's world the only thing that matters is
the position of that block.  Let's represent the world by a graph
consisting of a single node and no links to any other nodes.  Easy enough.

4.  Now consider a more complicated world.  There are TWO blocks, placed
apart from each other along an East/West line.  From the fly's point of
view, Block A (the western block) is TO-THE-LEFT-OF Block B (the eastern
block), and Block B has a similar relationship (TO-THE-RIGHT-OF) to
Block A.  Draw your symbolic representation of the situation as a graph
with nodes for the blocks and labeled links for the two relationships
which hold between the blocks.  (Believe it or not, you have just invented
the representation mechanism called a "semantic network".)

5.  Now the fly moves to the northern wall, looking south.  Draw the new
semantic network which represents the way the blocks look to him from his
new vantage point.

6.  What you have diagrammed in the above two steps is a Physical Symbol
System: a symbolic representation of a situation coupled with a process
for making changes in the representation which correspond homomorphically
with changes in the real world represented by the symbol system.
Unfortunately, your symbol system does not yet have a concrete
representation for this changing process.  To make things more concrete,
let's transform to another Physical Symbol System which can encode
EXPLICITLY the representation both of the WORLD (as seen by the fly)
and of HOW THE WORLD CHANGES when the fly moves.

Invent a representation for your semantic network using some familiar
programming language.  Remember that what is being modeled are OBJECTS
(the blocks) and RELATIONS between the objects.  Hint: you might like to
use property lists, but please feel no obligation to do so.
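One possible encoding of the two-block network, sketched in Python with
dictionaries standing in for Lisp property lists (the node names,
relation names, and accessor are chosen here for illustration):

```python
# Each node's "property list" maps a relation label to the node it
# points at -- a labeled link in the semantic network.
world = {
    'BLOCK-A': {'TO-THE-LEFT-OF': 'BLOCK-B'},
    'BLOCK-B': {'TO-THE-RIGHT-OF': 'BLOCK-A'},
}

def get_relation(network, node, relation):
    """Follow a labeled link from a node; None if no such link (like GET)."""
    return network[node].get(relation)

print(get_relation(world, 'BLOCK-A', 'TO-THE-LEFT-OF'))
```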

7.  Now the clincher which demonstrates the power of the idea that a
physical symbol system can represent PROCESSES as well as OBJECTS and
RELATIONS.  Write a program which transforms the WORLD-DESCRIPTION for
FLY-ON-SOUTH-WALL to WORLD-DESCRIPTION for FLY-ON-NORTH-WALL.  The
program should be a single function (with auxiliaries if you like)
which takes two arguments, the symbol SOUTH for the initial wall and
NORTH for the target wall, uses a global symbol whose value is your semantic
network representing the world seen from the south wall, and returns
T if successful and NIL if not.  As a side effect, the function should
CHANGE the symbolic structure representing the world so that afterward
it represents the blocks as seen by the fly from the north wall.
You might care to do this in two steps: first describing in English or
diagrams what is going on and then writing code to do it.

8.  The world is getting slightly more complex.  Now there are four
blocks, A and B as before (spread apart along an East/West line), C
which is ON-TOP-OF B, and D which is just to the north of (ie, in back
of when seen from the south) B.  Let's see your semantic network in
both graphical and Lisp forms.  The fly is on the South wall, looking North.
(Note that we mean "directly left-of" and so on.  A is LEFT-OF B but has
NO relation to D.)

9.  Generalize the code you wrote for question 4 (if you haven't already)
so that it correctly transforms the world seen by the fly from ANY of
the four walls (NORTH, EAST, SOUTH, and WEST) to that seen from any other
(including the same) wall.  What I mean by "generalize" is don't write
code that works only for the two-block or four-block worlds; code it so
it will work for ANY semantic network representing a world consisting of
ANY number of blocks with arbitrary relations between them chosen from
the set {LEFT-OF, RIGHT-OF, IN-FRONT-OF, IN-BACK-OF, ON-TOP-OF, UNDER}.
(Hint: if you are into group theory you might find a way to do this with
only ONE canonical transformation; otherwise just try a few examples
until you catch on.)
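A sketch of the group-theory hint, in Python: each quarter-turn of the
fly around the room (SOUTH -> EAST -> NORTH -> WEST) permutes the four
horizontal relations cyclically while leaving ON-TOP-OF and UNDER fixed,
so one canonical transformation applied k times handles any pair of
walls.  The dictionary encoding and function names are my own, not from
the exam:

```python
# One quarter-turn of the viewer permutes the viewer-relative relations.
QUARTER_TURN = {
    'LEFT-OF': 'IN-BACK-OF', 'IN-BACK-OF': 'RIGHT-OF',
    'RIGHT-OF': 'IN-FRONT-OF', 'IN-FRONT-OF': 'LEFT-OF',
    'ON-TOP-OF': 'ON-TOP-OF', 'UNDER': 'UNDER',
}
WALLS = ['SOUTH', 'EAST', 'NORTH', 'WEST']

def move_fly(network, old_wall, new_wall):
    """Destructively relabel every link for the fly's new vantage point."""
    turns = (WALLS.index(new_wall) - WALLS.index(old_wall)) % 4
    for node, links in network.items():
        new_links = {}
        for relation, target in links.items():
            for _ in range(turns):
                relation = QUARTER_TURN[relation]
            new_links[relation] = target
        network[node] = new_links
    return True                       # T if successful, in the Lisp spirit

world = {'A': {'LEFT-OF': 'B'}, 'B': {'RIGHT-OF': 'A'}}
move_fly(world, 'SOUTH', 'NORTH')     # two quarter-turns
print(world)
```

Moving SOUTH to NORTH is two quarter-turns, so LEFT-OF becomes RIGHT-OF
and vice versa, as questions 5 and 7 require.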

10.  Up to now we have been assuming the fly is always right-side-up.
Can you do question 6 under the assumption that the fly sometimes perches
on the wall upside-down?  Have your function take two extra arguments
(whose values are RIGHT-SIDE-UP or UPSIDE-DOWN) to specify the fly's
vertical orientation on the initial and final walls.

11.  Up to now we have been modeling the WORLD AS SEEN BY THE FLY.  If
the fly moves, the world changes.  Why is this approach no good when
we allow more flies into the room and wish to model the situation from
ANY of their perspectives?

12.  What can be done to fix the problem you pointed out above?  That is,
redefine the "axioms" of your representation so it works in the "multiple
conscious agent" case.  (Hint: new axioms might include new names for
the relations.)

13.  In your new representation, the WORLD is a static object, while we
have functions called "projectors" which given the WORLD and a vantage
point (a symbol from the set {NORTH, EAST, SOUTH, WEST} and another from
the set {RIGHT-SIDE-UP, UPSIDE-DOWN}) return a symbolic description (a
"projection") of the world as seen from that vantage point.  For the
reasons you gave in answer to question 11, the projectors CANNOT HAVE
SIDE EFFECTS.  Write the projector function.

14.  Now let's implement a perceptual cognitive model builder, a program
that takes as input a sensory description (a symbolic structure which
represents the world as seen from a particular vantage point) and a
description of the vantage point and returns a "static world descriptor"
which is invariant with respect to vantage point.  Code up such a model
builder, using for input a semantic network of the type you used in
questions 6 through 10 and for output a semantic network of the type
used in questions 12 and 13.  (Note that this function is nothing more
than the inverse of the projector from question 13.)


********    THAT'S IT !!!    THAT'S IT !!!    THAT'S IT !!!    ********


SOME HELPFUL LISP FUNCTIONS
You may use these plus anything else discussed in class.

Function      Argument description          Return value     Side effect

PUTPROP <symbol> <value> <property-name> ==>  <value>       adds property
GET <symbol> <property-name>             ==>  <value>
REMPROP <symbol> <property-name>         ==>  <value>    removes property


***********************************************************************

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:25:34-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 4/6  (Second Exam)
Article-I.D.: psuvax.384

1.  (20) Why are you now sitting on this side of the room?  Can you cite
an AI system which used a similar strategy in deciding what to do?

2.  (10) Explain the difference between CHRONOLOGICAL and DEPENDENCY-
DIRECTED backtracking.

3.  (10) Compare and contrast PRODUCTION SYSTEMS and SEMANTIC NETWORKS:
how they work, what they can represent, and what types of problems are
well suited to solution using each type of knowledge representation.

4.  (20) Describe the following searches in detail.  In detail means:
 1) How do they work??           2) How are they related to each other??
 3) What are their advantages??  4) What are their disadvantages??
      Candidate methods:
         1) Depth-first                 2) Breadth-first
         3) Hill-climbing               4) Beam search
         5) Best-first                  6) Branch-and-bound
         7) Dynamic Programming         8) A*
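For the first two candidate methods, one way to see how they are related
is that they differ only in the data structure holding the frontier: a
stack gives depth-first search, a queue gives breadth-first.  A minimal
sketch (the toy graph is invented for illustration):

```python
from collections import deque

def search(graph, start, goal, breadth_first):
    """Return True iff goal is reachable from start.

    The frontier is a deque used as a queue (breadth-first) or a
    stack (depth-first); everything else is identical.
    """
    frontier = deque([start])
    visited = set()
    while frontier:
        node = frontier.popleft() if breadth_first else frontier.pop()
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        frontier.extend(graph.get(node, []))
    return False

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
print(search(graph, 'S', 'G', breadth_first=True))
```

Hill-climbing, beam search, best-first, and A* can all be obtained from
the same skeleton by ordering or pruning the frontier instead.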

5.  (10) What are the characteristics of good generators for
the GENERATE and TEST problem-solving method?

6.  (10) Describe the ideas behind Mini-Max.  Describe the ideas behind
Alpha-Beta.  How do you use the two of them together and why would you
want to??
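A minimal sketch of how the two fit together: minimax defines the value
of a game tree, and the alpha/beta bounds merely cut off branches that
cannot change that value, so the combined search returns the same answer
faster.  Here a tree is a nested list whose leaves are static
evaluations (an encoding chosen for illustration):

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    """Minimax value of a nested-list game tree, with alpha-beta cutoffs."""
    if not isinstance(node, list):        # leaf: a static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # opponent will avoid this line
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Max to move; plain minimax on this tree gives the same value.
print(alphabeta([[3, 5], [2, 9]], True))
```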

7.  (50) Godel's Incompleteness Theorem states that any consistent and
sufficiently complex formal system MUST contain truths which cannot be
proved within the formal system.  Assume that THIS theorem is true.
  1.  If UNPROVABLE, how did Godel prove it?
  2.  If PROVABLE, provide an example of a true but unprovable statement.

8.  (40) Prove that this exam is unfinishable correctly; that is, prove
that this question is unsolvable.

9.  (50) Is human behavior governed by PREDESTINATION or FREE-WILL?  How
could you design a formal system to solve problems like that (that is, to
reason about "non-logical" concepts)?

10.  (40) Assume only ONE question on this exam were to be graded -- the
question answered by the FEWEST people.  How would you
decide what to do?  Show the productions such a system might use.

11.  (100) You will be given extra credit (up to 100 points) if by 12:10
pm today you bring to the staff a question.  If YOUR question is chosen,
it will be asked and everybody else given 10 points for a correct answer.
YOU will be given 100 points for a correct answer MINUS ONE POINT FOR EACH
CORRECT ANSWER GIVEN BY ANOTHER CLASS MEMBER.  What is your question?

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:27:19-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 5/6  (Third Exam)
Article-I.D.: psuvax.385

1.  What is the sum of the first N positive integers?  That is, what is:

         [put here the sigma-sign notation for the sum]

2.  Prove that your answer works for any N > 0.

3.  What is the sum of the squares of the first N positive integers:

         [put here the sigma-sign notation for the sum]

4.  Again, prove it.

5.  The proofs you gave (at least if you are drawing on a "traditional"
mathematical background) are based on "mathematical induction".
Briefly state this principle and explain why it works.
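The standard closed forms for questions 1 and 3 (not stated in the exam
itself) are N(N+1)/2 and N(N+1)(2N+1)/6.  They can at least be checked
mechanically against the defining sums, a spot check rather than a
proof:

```python
def sum_first(n):          # 1 + 2 + ... + n
    return n * (n + 1) // 2

def sum_squares(n):        # 1^2 + 2^2 + ... + n^2
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(1, 50):
    assert sum_first(n) == sum(range(1, n + 1))
    assert sum_squares(n) == sum(k * k for k in range(1, n + 1))
print('closed forms agree up to n = 49')
```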

6.  If you are like most people, your definition will work only over the
domain of NATURAL NUMBERS (positive integers).  Can this definition be
extended to work over ANY countable domain?

7.  Consider the lattice of points in N-dimensional space having integer
valued coordinates.  Is this space countable?

8.  Write a program (or express an algorithm in pseudocode) which returns
the number of points in this space (the one in #7) inside an N-sphere of
radius R (R is a real number > 0).
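One possible answer sketch for question 8: peel off one coordinate at a
time recursively, so the same code works for any dimension N (the
function name and recursive decomposition are my own choices):

```python
import math

def lattice_points(n, r):
    """Count integer points x with x1^2 + ... + xn^2 <= r^2."""
    if n == 0:
        return 1 if r >= 0 else 0     # the single zero-dimensional point
    total = 0
    limit = int(math.floor(r))
    for x in range(-limit, limit + 1):
        # points with first coordinate x lie in an (n-1)-sphere of
        # radius sqrt(r^2 - x^2)
        total += lattice_points(n - 1, math.sqrt(r * r - x * x))
    return total

print(lattice_points(1, 2.5))   # the integers -2 .. 2
print(lattice_points(2, 1.0))   # origin plus the four axis points
```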

9.  The domains you have considered so far are all countable.  The problem
solving methods you have used (if you're "normal") are based on
mathematical induction.  Is it possible to extend the principle of
mathematical induction (and recursive programming) to NON-COUNTABLE
domains?

10.  If you answered #9 NO, why not?  If you answered it YES, how?

11.  Problems #1 and #3 require you to perform INDUCTIVE REASONING
(a related but different use of the term "induction").  Discuss some of
the issues involved in getting a computer to perform this process
automatically.  (I mean the process of generating a finite symbolic
representation which when evaluated will return the partial sum for
an infinite sequence.)

12.  Consider the "sequence extrapolation" task: given a finite sequence
of symbols, predict the next few terms of the sequence or give a rule
which can generate ALL the terms of the sequence.  Is this problem
uniquely solvable?  Why or why not?

13.  If you answered #12 YES, how would you build a computer program to
do so?

14.  If you answered #12 NO, how could you constrain the problem to make
it uniquely solvable?  How would you build a program to solve the
constrained problem?

15.  Mankind is faced with the threat of nuclear annihilation.  Is there
anything the field of AI has to offer which might help avert that threat?
(Don't just say "yes" or "no"; come up with something real.)

16.  Assuming mankind survives the nuclear age, it is very likely that
ethical issues relating to AI and the use of computers will have very
much to do with the view the "person on the street" has of the human
purpose and role in the Universe.  In what way can AI researchers plan
NOW so that these ethical issues are resolved to the benefit of the
greatest number of people?

17.  Could it be that our (humankind's) purpose on earth is to invent
and build the species which will be the next in the evolutionary path?
Should we do so?  How?  Why?  Why not?

18.  Suppose you have just discovered the "secret" of Artificial
Intelligence; that is, you (working alone and in secret) have figured
out a way (new hardware, new programming methodology, whatever) to build
an artificial device which is MORE INTELLIGENT, BY ANY DEFINITION, BY
ANY TEST WHATSOEVER, than any human being.  What do you do with this
knowledge?  Explain the pros and cons of several choices.

19.  Question #9 indicates that SO FAR all physical symbol systems have
dealt ONLY with discrete domains.  Is it possible to generalize the
idea to continuous domains?  Since many aspects of the human nervous
system function on a continuous (as opposed to discrete) basis, is it
possible that the invention of CONTINUOUS PHYSICAL SYMBOL SYSTEMS might
provide part of the key to the "secret of intelligence"?

20.  What grade do you feel you DESERVE in this course?  Why?  What
grade do you WANT?  Why?  If the two differ, is there anything you
want to do to reduce the difference?  Why or Why Not?  What is it?
Why is it (or is it not) worth doing?

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂09-Jan-84  1641	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #5 
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84  16:41:38 PST
Date: Mon  9 Jan 1984 14:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #5
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 5

Today's Topics:
  AI and Weather Forecasting - Request,
  Expert Systems - Request,
  Pattern Recognition & Cognition,
  Courses - Reaction to PSU's AI Course,
  Programming Languages - LISP Advantages
----------------------------------------------------------------------

Date: Mon 9 Jan 84 14:15:13-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI and Weather Forecasting

I have been talking with people interested in AI techniques for
weather prediction and meteorological analysis.  I would appreciate
pointers to any literature or current work on this subject, especially

    * knowledge representations for spatial/temporal reasoning;
    * symbolic description of weather patterns;
    * capture of forecasting expertise;
    * inference methods for estimating meteorological variables
      from (spatially and temporally) sparse data;
    * methods of interfacing symbolic knowledge and heuristic
      reasoning with numerical simulation models;
    * any weather-related expert systems.

I am aware of some recent work by Gaffney and Racer (NBS Trends and
Applications, 1983) and by Taniguchi et al. (6th Pat. Rec., 1982),
but I have not been following this field.  A bibliography or guide
to relevant literature would be welcome.

                                        -- Ken Laws

------------------------------

Date: 5 January 1984 13:47 est
From: RTaylor.5581i27TK at RADC-MULTICS
Subject: Expert Systems Info Request


Hi, y'all...I have the names (hopefully, correct) of four expert
systems/tools/environments (?).  I am interested in the "usual":  that
is, general info, who to contact, feedback from users, how to acquire
(if we want it), etc.  The four names I have are:  RUS, ALX, FRL, and
FRED.

Thanks.  Also, thanks to those who provided info previously...I have
info (similar to that requested above) on about 15 other
systems/tools/environments...some of the info is a little sketchy!

             Roz  (aka:  rtaylor at radc-multics)

------------------------------

Date: 3 Jan 84 20:38:52-PST (Tue)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: Re: Loop detection and classical psychology
Article-I.D.: mit-eddi.1114

One of the truly amazing things about the human brain is that its pattern
recognition capabilities seem limitless (in extreme cases).  We don't even
have a satisfactory way to describe pattern recognition as it occurs in
our brains.  (Well, maybe we have something acceptable at a minimum level.
I'm always impressed by how well dollar-bill changers seem to work.)  As
a friend of mine put it, "the brain immediately rejects an infinite number
of wrong answers," when working on a problem.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: Fri 6 Jan 84 10:11:01-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: PSU's First AI Course

Wow!  I actually think it's kind of neat (but, of course, very wacko).  I
particularly like making people think about the ethical and philosophical
considerations at the same time as they're thinking about minimax, etc.

------------------------------

Date: Wed 4 Jan 84 17:23:38-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: AIList Digest   V2 #1

[in response to Herb Lin's questions]

Well, 2 more or less answers 1.   One of the main reasons why Lisp and not C
is the language of many people's choice for AI work is that you can easily cons
up at run time a piece of data which "is" the next action you are going to
take.   In most languages you are restricted to choosing from pre-written
actions, unless you include some kind of interpreter right there in your AI
program.   Another reason is that Lisp has all sorts of extensibility.

As for 3, the obvious response is that in Pascal control has to be routed to an
IF statement before it can do any good, whereas in a production system, control
automatically "goes" to any production that is applicable.   This is highly
over-simplified and may not be the answer you were looking for.

                                                - Richard

------------------------------

Date: Friday,  6 Jan 1984 13:10-PST
From: narain@rand-unix
Subject: Reply to Herb Lin: Why is Lisp good for AI?


A central issue in AI is knowledge representation.  Experimentation with a
new KR scheme often involves defining a new language.  Often, definitions
and meanings of new languages are conceived of naturally in terms of
recursive (hierarchical) structures.  For instance, many grammars of
English-like frontends are recursive, as are production system definitions
and theorem provers.

The abstract machinery underlying Lisp, the Lambda Calculus, is also
inherently recursive, yet very simple and powerful.  It involves the notion
of function application to symbolic expressions.  Functions can themselves
be symbolic expressions.  Symbolic expressions provide a basis for SIMPLE
implementation and manipulation of complex data/knowledge/program
structures.

It is therefore possible to easily interpret new language primitives in
terms of Lisp's already very high level primitives.  Thus, Lisp is a great
"machine language" for AI.

The usefulness of a well understood, powerful, abstract machinery of the
implementation language is probably more obvious when we consider Prolog.
The logical interpretation of Prolog programs helps considerably in their
development and verification.  Logic is a convenient specification language
for a lot of AI, and it is far easier to 'compile' those specifications
into a logic language like Prolog than into Pascal.  For instance, take
natural language front ends implemented in DCGs, or database/expert-system
integrity and redundancy constraints.

The fact that programs can be considered as data is not true only of Lisp.
Even in Pascal you can analyze a Pascal program.  The nice thing in Lisp,
however, is that because of its few (but very powerful) primitives,
programs tend to be simply structured and concise (cf. claims in recent
issues of this bulletin that Lisp programs were much shorter than Pascal
programs).  So naturally it is simpler to analyze Lisp programs in Lisp
than it is to analyze Pascal programs in Pascal.

Of course, Lisp environments have evolved for over two decades and
contribute no less to its desirability for AI.  Some of the nice features
include screen-oriented editors, interactiveness, debugging facilities, and
an extremely simple syntax.

I would greatly appreciate any comments on the above.

Sanjai Narain
Rand.

------------------------------

Date: 6 Jan 84 13:20:29-PST (Fri)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Herb Lin's questons on LISP etc.
Article-I.D.: mit-eddi.1129

One of the problems with LISP, however, is that it does not force one
to subscribe to a code of good programming practice.  I've found
that the things I have written for my bridge-playing program (over
the last 18 months or so) have gotten incredibly crufty, with
some real brain-damaged patches.  Yeah, I realize it's my fault;
I'm not complaining about it because I love LISP, I just wanted
to mention some of the pitfalls for people to think about.  Right
now, I'm in the process of weeding out the cruft, trying to make
it more clearly modular, decrease the number of similar functions
and so on.  Sigh.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 7 January 1984 15:08 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: my questions of last Digest on differences between PASCAL
         and LISP

So many people replied that I send my thanks to all via the list.  I
very much appreciate the time and effort people put into their
comments.

------------------------------

End of AIList Digest
********************

∂10-Jan-84  1336	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #6 
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84  13:34:22 PST
Date: Tue 10 Jan 1984 09:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #6
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 6

Today's Topics:
  Humor,
  Seminars - Programming Styles & ALICE & 5th Generation,
  Courses - Geometric Data Structures & Programming Techniques & Linguistics
----------------------------------------------------------------------

Date: Mon, 9 Jan 84 08:45 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: An AI Joke

Last week a cartoon appeared in our local (Rochester NY) paper.  It was
by a fellow named Toles, a really excellent editorial cartoonist who
works out of, of all places, Buffalo:

Panel 1:

[medium view of the Duckburg Computer School building.  A word balloon
extends from one of the windows]
"A lot of you wonder why we have to spend so much time studying these
things."

Panel 2:

[same as panel 1]
"It so happens that they represent a lot of power.  And if we want to
understand and control that power, we have to study them."

Panel 3:

[interior view of a classroom full of personal computers.  At right,
several persons are entering.  At left, a PC speaks]
". . .so work hard and no talking.  Here they come."

Tickler (a mini-cartoon down in the corner):

[a lone PC speaks to the cartoonist]
"But I just HATE it when they touch me like that. . ."


Mark

------------------------------

Date: Sat, 7 Jan 84 20:02 PST
From: Vaughan Pratt <pratt@navajo>
Subject: Imminent garbage collection of Peter Coutts.  :=)

  [Here's another one, reprinted from the SU-SCORE bboard.  -- KIL]

Les Goldschlager is visiting us on sabbatical from Sydney University, and
stayed with us while looking for a place to stay.  We belatedly pointed him
at Peter Coutts, which he immediately investigated and found a place to
stay right away.  His comment was that no pointer to Peter Coutts existed
in any of the housing assistance services provided by Stanford, and that
therefore it seemed likely that it would be garbage collected soon.
-v

------------------------------

Date: 6 January 1984 23:48 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Seminar on Programming Styles in AI

                     DATE:      Thursday, January 12, 1984
                     TIME:      3.45 p.m.  Refreshments
                                4.00 p.m.  Lecture
                     PLACE:     NE43-8th Floor, AI Playroom


               PROGRAMMING STYLES IN ARTIFICIAL INTELLIGENCE

                              Herbert Stoyan
                   University of Erlangen, West Germany

                               ABSTRACT

Not much is clear about the scientific methods used in AI research.
Scientific methods are sets of rules used to collect knowledge about the
subject being researched.  AI is an experimental branch of computer science
which does not seem to use established programming methods.  In several
works on AI we can find the following method:

    1.  develop a new convenient programming style

    2.  invent a new programming language which supports the new style
        (or embed some appropriate elements into an existing AI language,
        such as LISP)

    3.  implement the language (interpretation as a first step is
        typically less efficient than compilation)

    4.  use the new programming style to make things easier.

A programming style is a way of programming guided by a speculative view of
a machine which works according to the programs.  A programming style is
not a programming method.  It may be detected by analyzing the text of a
completed program.  In general, it is possible to program in one
programming language according to the principles of various styles.  This
is true in spite of the fact that programming languages are usually
designed with some machine model (and therefore with some programming
style) in mind.  We discuss some of the AI programming styles.  These
include operator-oriented, logic-oriented, function-oriented, rule-
oriented, goal-oriented, event-oriented, state-oriented, constraint-
oriented, and object-oriented. (We shall not however discuss the common
instruction-oriented programming style).  We shall also give a more detailed
discussion of how an object-oriented programming style may be used in
conventional programming languages.

HOST:  Professor Ramesh Patil

------------------------------

Date: Mon 9 Jan 84 14:09:07-PST
From: Laws@SRI-AI
Subject: SRI Talk on ALICE, 1/23, 4:30pm, EK242


ALICE:  A parallel graph-reduction machine for declarative and other
languages.

SPEAKER -  John Darlington, Department of Computing, Imperial College,
           London
WHEN    -  Monday, January 23, 4:30pm
WHERE   -  AIC Conference Room, EK242

     [This is an SRI AI Center talk.  Contact Margaret Olender at
     MOLENDER@SRI-AI or 859-5923 if you would like to attend.  -- KIL]

                           ABSTRACT

ALICE is a highly parallel graph-reduction machine being designed and
built at Imperial College.  Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.

This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed.  A 24-processor prototype is planned for early
1985.  This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages.  ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.

------------------------------

Date: 9 Jan 1984 1556-PST
From: OAKLEY at SRI-CSL
Subject: SRI 5th Generation Talk


  Japan's 5th Generation Computer Project: Past, Present, and Future
      -- personal observations by a researcher of
         ETL (ElectroTechnical Laboratory)

                          Kokichi FUTATSUGI
                    Senior Research Scientist, ETL
                    International Fellow, SRI-CSL


    Talk on January 24, l984, in conference room EL369 at 10:00am.
    [This is an SRI Computer Science Laboratory talk.  Contact Mary Oakley
    at OAKLEY@SRI-AI or 859-5924 if you would like to attend.  -- KIL]


1 Introduction
  * general overview of Japan's research activities in
    computer science and technology
  * a personal view

2 Past -- pre-history of ICOT (the Institute of New Generation
  Computer Technology)
  * ETL's PIPS project
  * preliminary research and study activities
  * the establishment of ICOT

3 Present -- present activities
  * the organization of ICOT
  * research activities inside ICOT
  * research activities outside ICOT

4 Future -- ICOT's plans and general overview
  * ICOT's plans
  * relations to other research activities
  * some comments

------------------------------

Date: Thu 5 Jan 84 16:41:57-PST
From: Martti Mantyla <MANTYLA@SU-SIERRA.ARPA>
Subject: Data Structures & Algorithms for Geometric Problems

                    [Reprinted from the SU-SCORE bboard.]

                                  NEW COURSE:
                     EE392 DATA STRUCTURES AND ALGORITHMS
                            FOR GEOMETRIC PROBLEMS


Many problems arising in science and engineering deal with geometric
information.  Engineering design is most often a spatial activity, where a
physical shape with certain desired properties must be created.  Engineering
analysis also relies heavily on information about the geometric form of the
object.

The seminar Data Structures and Algorithms for Geometric Problems deals with
problems related to representing and processing data on the geometric shape of
an object in a computer.  It will concentrate on practically interesting
solutions to tasks such as

   - representation of digital images,
   - representation of line figures,
   - representation of three-dimensional solid objects, and
   - representation of VLSI circuits.

The point of view taken is hence slightly different from a "hard-core"
Computational Geometry view that puts emphasis on asymptotic computational
complexity.  In practice, one needs solutions that can be implemented in a
reasonable time, are efficient and robust enough, and can support an
interesting scope of applications.  Of growing importance is to find
representations and algorithms for geometry that are appropriate for
implementation in special hardware, and VLSI in particular.

The seminar will be headed by

    Dr. Martti Mantyla (MaM)
    Visiting Scholar
    CSL/ERL 405
    7-9310
    MANTYLA@SU-SIERRA.ARPA

who will give introductory talks.  Guest speakers of the seminar include
well-known scientists and practitioners of the field such as Dr. Leo Guibas
and Dr. John Ousterhout.  Classes are held on

                             Tuesdays, 2:30 - 3:30
                                      in
                                    ERL 126

First class will be on 1/10.

The seminar should be of interest to CS/EE graduate students with research
interests in computer graphics, computational geometry, or computer
applications in engineering.

------------------------------

Date: 6 Jan 1984 1350-EST
From: KANT at CMU-CS-C.ARPA
Subject: AI Programming Techniques Course

                  [Reprinted from the CMUC bboard.]


           Announcing another action-packed AI mini-course!
                 Starting soon in the 5409 near you.

This course covers a variety of AI programming techniques and languages.
The lectures will assume a background equivalent to an introductory AI course
(such as the undergraduate course 15-380/381 or the graduate core course
15-780.)  They also assume that you have had at least a brief introduction to
LISP and a production-system language such as OPS5.

       15-880 A,  Artificial Intelligence Programming Techniques
                         MW 2:30-3:50, WeH 5409


T Jan 10        (Brief organizational meeting only)
W Jan 11        LISP: Basic Pattern Matching (Carbonell)
M Jan 16        LISP: Deductive Data Bases (Steele)
W Jan 18        LISP: Basic Control: backtracking, demons (Steele)
M Jan 23        LISP: Non-Standard Control Mechanisms (Carbonell)
W Jan 25        LISP: Semantic Grammar Interpreter (Carbonell)
M Jan 30        LISP: Case-Frame interpreter (Hayes)
W Feb 1         PROLOG I (Steele)
M Feb 6         PROLOG II (Steele)
W Feb 8         Reason Maintenance and Comparison with PROLOG (Steele)
M Feb 13        AI Programming Environments and Hardware I (Fahlman)
W Feb 15        AI Programming Environments and Hardware II (Fahlman)
M Feb 20        Schema Representation Languages I (Fox)
W Feb 22        Schema Representation Languages II (Fox)
W Feb 29        User-Interface Issues in AI (Hayes)
M Mar 5         Efficient Game Playing and Searching (Berliner)
W Mar 7         Production Systems: Basic Programming Techniques (Kant)
M Mar 12        Production Systems: OPS5 Programming (Kant)
W Mar 14        Efficiency and Measurement in Production Systems (Forgy)
M Mar 16        Implementing Diagnostic Systems as Production Systems (Kahn)
M Mar 26        Intelligent Tutoring Systems: GRAPES and ACT Implementations
                     (Anderson)
W Mar 28        Explanation and Knowledge Acquisition in Expert Systems
                     (McDermott)
M Apr 2         A Production System for Problem Solving: SOAR2 (Laird)
W Apr 4         Integrating Expert-System Tools with SRL (KAS, PSRL, PDS)
                     (Rychener)
M Apr 9         Additional Expert System Tools: EMYCIN, HEARSAY-III, ROSIE,
                   LOOPS, KEE (Rosenbloom)
W Apr 11        A Modifiable Production-System Architecture: PRISM (Langley)
M Apr 16        (additional topics open to negotiation)

------------------------------

Date: 9 Jan 1984 1238:48-EST
From: Lori Levin <LEVIN@CMU-CS-C.ARPA>
Subject: Linguistics Course

                  [Reprinted from the CMUC bboard.]

NATURAL LANGUAGE SYNTAX FOR COMPUTER SCIENTISTS

FRIDAYS  10:00 AM - 12:00
4605 Wean Hall

Lori Levin
Richmond Thomason
Department of Linguistics
University of Pittsburgh

This is an introduction to recent work in generative syntax.  The
course will deal with the formalism of some of the leading syntactic
theories as well as with methodological issues.  Computer scientists
find the formalism used by syntacticians easy to learn, and so the
course will begin at a fairly advanced level, though no special
knowledge of syntax will be presupposed.

We will begin with a sketch of the "Standard Theory," Chomsky's
approach of the mid-60's from which most of the current theories have
evolved.  Then we will examine Government-Binding Theory, the
transformational approach now favored at M.I.T.  Finally, we will
discuss in more detail two nontransformational theories that are more
computationally tractable and have figured in joint research projects
involving linguists, psychologists, and computer scientists:
Lexical-Functional Grammar and Generalized Phrase Structure
Grammar.

------------------------------

End of AIList Digest
********************

∂16-Jan-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #7 
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Jan 84  22:44:15 PST
Date: Mon 16 Jan 1984 21:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #7
To: AIList@SRI-AI


AIList Digest            Tuesday, 17 Jan 1984       Volume 2 : Issue 7

Today's Topics:
  Production Systems - Requests,
  Expert Systems - Software Debugging Aid,
  Logic Programming - Prolog Textbooks & Disjunction Problem,
  Alert - Fermat's Last Theorem Proven?,
  Seminars - Multiprocessing Lisp & Lisp History,
  Conferences - Logic Programming Discount & POPL'84,
  Courses - PSU's First AI Course & Net AI Course
----------------------------------------------------------------------

Date: 11 Jan 1984 1151-PST
From: Jay <JAY@USC-ECLC>
Subject: Request for production systems

  I would like pointers  to free or  public domain production  systems
(running on Tops-20, Vax-Unix, or Vax-Vms) both interpreters (such  as
ross) and systems built up on them (such as emycin).  I am  especially
interested in Rosie, Ross, Ops5, and Emycin.  Please reply directly to
me.
j'

ARPA: jay@eclc

------------------------------

Date: Thu 12 Jan 84 12:13:20-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Taxonomy of Production Systems

I'm looking for info on a formal taxonomy of production rule systems,
sufficiently precise that it can distinguish OPS5 from YAPS, but also say
that they're more similar than either of them is to Prolog.  The only
relevant material I've seen is the paper by Davis & King in MI 8, which
characterizes PSs in terms of syntax, complexity of LHS and RHS, control
structure, and "programmability" (seems to mean meta-rules).  This is
a start, but too vague to be implemented.  A formal taxonomy should
indicate where "holes" exist, that is, strange designs that nobody has
built.  Also, how would Georgeff's (Stanford STAN-CS-79-716) notion of
"controlled production systems" fit in?  He showed that CPSs are more
general than PSs, but then one can also show that any CPS can be represented
by some ordinary PS.  I'm particularly interested in formalization of
the different control strategies - are text order selection (as in Prolog)
and conflict resolution (as in OPS5) mutually exclusive, or can they be
intermixed (perhaps using text order to find 5 potential rules, then
conflict resolution to choose among the 5)?  Presumably a sufficiently
precise taxonomy could answer these sorts of questions.  Has anyone
looked at these questions?

                                                        stan shebs

------------------------------

Date: 16 Jan 84 19:13:21 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Expert systems for software debugging?

Debugging is a black art, not at all algorithmic, but almost totally
heuristic.  There is a lot of expert knowledge around about how to debug
faulty programs, but it is rarely written down or systematized.  Usually
it seems to reside solely in the minds of a few "debugging whizzes".

Does anyone know of an expert system that assists in software debugging?
Or any attempts (now or in the past) to produce such an expert?

/Ron

------------------------------

Date: 12 Jan 84 20:43:31-PST (Thu)
From: harpo!floyd!clyde!akgua!sb1!mb2c!uofm-cv!lah @ Ucb-Vax
Subject: prolog reference
Article-I.D.: uofm-cv.457

Could anybody give some references to a good introductory book
on Prolog?

------------------------------

Date: 14 Jan 84 14:50:57-PST (Sat)
From: decvax!duke!mcnc!unc!bts @ Ucb-Vax
Subject: Re: prolog reference
Article-I.D.: unc.6594

There's only one introductory book I know of: Clocksin
and Mellish's "Programming in Prolog", Springer-Verlag, 1981.
It's a silver paperback, probably still under $20.00.

For more information on the language, try Clark and Tarnlund's
"Logic Programming", Academic Press, 1982.  It's a white hard-
back, with an elephant on the cover.  The papers by Bruynooghe
and by Mellish tell a lot about Prolog implementation.

Bruce Smith, UNC-Chapel Hill
decvax!duke!unc!bts     (USENET)
bts.unc@CSnet-Relay (lesser NETworks)

------------------------------

Date: 13 Jan 84 8:11:49-PST (Fri)
From: hplabs!hao!seismo!philabs!sbcs!debray @ Ucb-Vax
Subject: re: trivial reasoning problem?
Article-I.D.: sbcs.572

Re: Marcel Schoppers' problem: given two lamps A and B, such that:

        condition 1) at least one of them is on at any time; and
        condition 2) if A is on then B is off,

        we are to enumerate the possible configurations without an exhaustive
        generate-and-test strategy.

The following "pure" Prolog program will generate the various
configurations without exhaustively generating all possible combinations:


  config(A, B) :- cond1(A, B), cond2(A, B).   /* both conditions must hold */

  cond1(1, _).    /* at least one is on at any time ... condition 1 above */
  cond1(_, 1).

  cond2(1, 0).    /* if A is on then B is off */
  cond2(0, _).    /* if A is off, B's value is a don't care */

Executing the program in Prolog gives:

| ?- config(A, B).

A = 1
B = 0 ;

A = 0
B = 1 ;

no
| ?- halt.
[ Prolog execution halted ]

Tracing the program shows that the configuration "A=0, B=0" is not generated.
This satisfies the "no-exhaustive-listing" criterion.  Note that encoding
the second condition above using "not" would both (1) fall outside pure
Horn clauses and (2) amount to exhaustive generation and filtering.
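For contrast, a minimal sketch of that negation-based generate-and-test
alternative (not part of the original message; the predicate names lamp/1
and config_gt/2 are my own):

```prolog
% Generate-and-test encoding, for contrast with the pure version above.
lamp(0).
lamp(1).

config_gt(A, B) :-
    lamp(A), lamp(B),        % exhaustively enumerate all four pairs
    ( A = 1 ; B = 1 ),       % condition 1: at least one lamp is on
    \+ ( A = 1, B = 1 ).     % condition 2 via negation-as-failure
```

This yields the same two answers, but tracing shows it visits and then
rejects the pair A=0, B=0 (and A=1, B=1), which is exactly the exhaustive
filtering the pure clauses avoid.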

Saumya Debray
Dept. of Computer Science
SUNY at Stony Brook

                {floyd, bunker, cbosgd, mcvax, cmcl2}!philabs!
                                                              \
        Usenet:                                                sbcs!debray
                                                              /
                   {allegra, teklabs, hp-pcd, metheus}!ogcvax!
        CSNet: debray@suny-sbcs@CSNet-Relay


[Several other messages discussing this problem and suggesting Prolog
code were printed in the Prolog Digest.  Different writers suggested
very different ways of structuring the problem.  -- KIL]


------------------------------

Date: Fri 13 Jan 84 11:16:21-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Fermat's Last Theorem Proven?

                [Reprinted from the UTEXAS-20 bboard.]

There was a report last night on National Public Radio's All Things Considered
about a British mathematician named Arnold Arnold who claims to have
developed a new technique for dealing with multi-variable, high-dimensional
spaces.  The method apparently makes generation of large prime numbers
very easy, and has applications in genetics, the many-body problem, orbital
mechanics, etc.  Oh yeah, the proof of Fermat's Last Theorem falls out of
this as well!  The guy apparently has no academic credentials, and refuses
to publish in the journals because he's interested in selling his technique.
There was another mathematician named Jeffrey Colby who had been allowed
to examine Arnold's work on the condition he didn't disclose anything.
He claims the technique is all it's claimed to be, and shows what can
be done when somebody starts from pure ignorance, unclouded by the
preconceptions of a formal mathematical education.

If anybody hears more about this, please pass it along.

Clive

------------------------------

Date: 12 Jan 84  2350 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Next week's CSD Colloquium.

                [Reprinted from the SU-SCORE bboard.]

  Dr. Richard P. Gabriel, Stanford CSD
  ``Queue-based Multi-processing Lisp''
  4:30pm Terman Auditorium, Jan 17th.

As the need for high-speed computers increases, the need for
multi-processors will become more apparent. One of the major stumbling
blocks to the development of useful multi-processors has been the lack of
a good multi-processing language---one which is both powerful and
understandable to programmers.

Among the most compute-intensive programs are artificial intelligence (AI)
programs, and researchers hope that the potential degree of parallelism in
AI programs is higher than in many other applications.  In this talk I
will propose a version of Lisp which is multi-processed.  Unlike other
proposed multi-processing Lisps, this one will provide only a few very
powerful and intuitive primitives rather than a number of parallel
variants of familiar constructs.

The talk will introduce the language informally, and many examples along
with performance results will be shown.

------------------------------

Date: 13 January 1984 07:36 EST
From: Kent M Pitman <KMP @ MIT-MC>
Subject: What is Lisp today and how did it get that way?

                 [Reprinted from the MIT-MC bboard.]

                        Modern Day Lisp

        Time:   3:00pm
        Date:   Wednesdays and Fridays, 18-27 January
        Place:  8th Floor Playroom

The Lisp language has changed significantly in the past 5 years. Modern
Lisp dialects bear only a superficial resemblance to each other and to
their common parent dialects.

Why did these changes come about? Has progress been made? What have we
learned in 5 hectic years of rapid change? Where is Lisp going?

In a series of four lectures, we'll be surveying a number of the key
features that characterize modern day Lisps. The current plan is to touch
on at least the following topics:


        Scoping. The move away from dynamic scoping.
        Namespaces. Closures, Locales, Obarrays, Packages.
        Objects. Actors, Capsules, Flavors, and Structures.
        Signals. Errors and other unusual conditions.
        Input/Output. From streams to window systems.


The discussions will be more philosophical than technical. We'll be
looking at several Lisp dialects, not just one. These lectures are not
just something for hackers. They're aimed at just about anyone who uses
Lisp and wants an enhanced appreciation of the issues that have shaped
its design and evolution.

As it stands now, I'll be giving all of these talks, though there
is some chance there will be some guest lecturers on selected
topics. If you have questions or suggestions about the topics to be
discussed, feel free to contact me about them.

                        Kent Pitman (KMP@MC)
                        NE43-826, x5953

------------------------------

Date: Wed 11 Jan 84 16:55:02-PST
From: PEREIRA@SRI-AI.ARPA
Subject: IEEE Logic Programming Symposium (update)

              1984 International Symposium on
                      Logic Programming

                 Student Registration Rates


In our original symposium announcements, we failed to offer a student
registration rate. We would like to correct that situation now.
Officially enrolled students may attend the symposium for the reduced
rate of $75.00.

This rate includes the symposium itself (all three days) and one copy
of the symposium proceedings. It does not include the tutorial, the
banquet, or cocktail parties.  It does, however, include the Casino
entertainment show.

Questions and requests for registration forms by US mail to:

   Doug DeGroot                           Fernando Pereira
   Program Chairman                       SRI International
   IBM Research                    or     333 Ravenswood Ave.
   P.O. Box 218                           Menlo Park, CA 94025
   Yorktown Heights, NY 10598             (415) 859-5494
   (914) 945-3497

or by net mail to:

                  PEREIRA@SRI-AI (ARPANET)
                  ...!ucbvax!PEREIRA@SRI-AI (UUCP)

------------------------------

Date: Tue 10 Jan 84 15:54:09-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: *** P O P L 1984 --- Announcement ***

*******************************  POPL 1984 *********************************

                              ELEVENTH ANNUAL

                            ACM SIGACT/SIGPLAN

                               SYMPOSIUM ON

                               PRINCIPLES OF

                           PROGRAMMING LANGUAGES


    *** POPL 1984 will be held in Salt Lake City, Utah January 15-18. ****
  (The skiing is excellent, and the technical program threatens to match it!)

For additional details, please contact

        Prof. P. A. Subrahmanyam
        Department of Computer Science
        University of Utah
        Salt Lake City, Utah 84112.

        Phone: (801)-581-8224

ARPANET: Subrahmanyam@UTAH-20 (or Subra@UTAH-20)


------------------------------

Date: 12 Jan 84 4:51:51-PST (Thu)
From: 
Subject: Re: PSU's First AI Course - Comment
Article-I.D.: sjuvax.108

I would rather NOT get into social issues of AI: there are millions of
forums for that (and I myself have all kinds of feelings and reservations
on the issue, including Vedantic interpretations), so let us keep this
one technical, please.

------------------------------

Date: 13 Jan 84 11:42:21-PST (Fri)
From: 
Subject: Net AI course -- the communications channel
Article-I.D.: psuvax.413

Responses so far have strongly favored my creating a moderated newsgroup
as a sub to net.ai for this course.  Most were along these lines:

    From: ukc!srlm (S.R.L.Meira)

    I think you should act as the moderator, otherwise there would be too
    much noise - in the sense of unordered information and discussions -
    and it could finish looking like just another AI newsgroup argument.
    Anybody is of course free to post whatever they want if they feel
    the thing is not coming out like they want.

Also, if the course leads to large volume, many net.ai readers (busy AI
professionals rather than students) might drop out of net.ai.

For a contrasting position:

    From: cornell!nbires!stcvax!lat

    I think the course should be kept as a newsgroup.  I don't think
    it will increase the nation-wide phone bills appreciably beyond
    what already occurs due to net.politics, net.flame, net.religion
    and net.jokes.

So HERE's how I'll try to keep EVERYBODY happy ...    :-)

... a "three-level" communication channel.  1: a "free-for-all" via mail
(or possibly another newsgroup), 2: a moderated newsgroup sub to net.ai,
3: occasional abstracts, summaries, pointers posted to net.ai and AIList.

People can then choose the extent of their involvement and set their own
"bull-rejection threshold".  (1) allows extensive involvement and flaming,
(2) would be the equivalent of attending a class, and (3) makes whatever
"good stuff" evolves from the course available to all others.

The only remaining question: should (1) be done via a newsgroup or mail?

Please send in your votes -- I'll make the final decision next week.

Now down to the REALLY BIG decisions: names.  I suggest "net.ai.cse"
for level (2).  The "cse" can EITHER mean "Computer Science Education"
or abbreviate "course".  For level (1), how about "net.ai.ffa" for
"free-for-all", or .raw, or .disc, or .bull, or whatever.

Whatever I create gets zapped at end of course (June), unless by then it
has taken on a life of its own.

        -- Bob

[PS to those NOT ON USENET: please mail me your address for private
mailings -- and indicate which of the three "participation levels"
best suits your tastes.]

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂17-Jan-84  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #8 
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Jan 84  23:46:18 PST
Date: Tue 17 Jan 1984 22:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #8
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 Jan 1984      Volume 2 : Issue 8

Today's Topics:
  Programming Languages - Lisp for IBM,
  Intelligence - Subcognition,
  Seminar - Knowledge-Based Design Environment
----------------------------------------------------------------------

Date: Thu 12 Jan 84 15:07:55-PST
From: Jeffrey Mogul <MOGUL@SU-SCORE.ARPA>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

        Does anyone know of LISP implementations for IBM 370--3033--308x?

Reminds me of an old joke:
        How many IBM machines does it take to run LISP?

        Answer: two -- one to send the input to the PDP-10, one
                to get the output back.

------------------------------

Date: Thursday, 12 Jan 1984 21:28-PST
From: Steven Tepper <greep@SU-DSN>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

Well, I used Lisp on a 360 once, but I certainly wouldn't recommend
that version (I don't remember where it came from anyway -- the authors
were probably so embarrassed they wanted to remain anonymous).  It
was, of course, a batch system, and its only output mode was "uglyprint" --
no matter what the input looked like, the output would just be printed
120 columns to a line.

------------------------------

Date: Fri 13 Jan 84 06:55:00-PST
From: Ethan Bradford <JLH.BRADFORD@SU-SIERRA.ARPA>
Subject: LISP (INTERLISP) for IBM

                [Reprinted from the SU-SCORE bboard.]

Chris Ryland (CPR@MIT-XX) sent out a query on this before and he got back
many good responses (he gave me copies).  The main thing most people said
is that a version was developed at Uppsala in Sweden in the 70's.  One
person gave an address to write to, which I transcribe here with no
guarantees of currency:
    Klaus Appel
    UDAC
    Box 2103
    750 02 Uppsala
    Sweden
    Phone: 018-11 13 30

------------------------------

Date: 13 Jan 84  0922 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Lisp for IBM machines

                [Reprinted from the SU-SCORE bboard.]

Standard Lisp runs quite well on the IBM machines.
The folks over at IMSSS on campus know all about it --
they have written several large theorem proving/CAI programs for
that environment.

------------------------------

Date: 11 January 1984 06:27 EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: intelligence and genius

I should have thought that if you can make a machine more or
less intelligent; and make another machine ABLE TO RECOGNIZE
GENIUS (it need not itself be able to "be" or "have" genius)
then the "genius machine " problem is probably solved: have the
somewhat intelligent one generate lots of ideas, with random
factors thrown in, and have the second "recognizing" machine
judge the products.
        Obviously they could be combined into one machine.

------------------------------

Date: Sunday, 15 January 1984, 00:18-EST
From: Marek W. Lugowski <MAREK%MIT-OZ@MIT-MC.ARPA>
Subject: Addressing DRogers' questions (at last) + on subcognition

    DROGERS (c. November '83):
      I have a few questions I would like to ask, some (perhaps most)
    essentially unanswerable at this time.

Apologies in advance for rashly attempting to answer at this time.

      - Should the initially constructed subcognitive systems be
    "learning" systems, or should they be "knowledge-rich" systems? That
    is, are the subcognitive structures implanted with their knowledge
    of the domain by the programmer, or is the domain presented to the
    system in some "pure" initial state?  Is the approach to
    subcognitive systems without learning advisable, or even possible?

I would go out on a limb and claim that attempting wholesale "learning"
first (whatever that means these days) is silly.  I would think one
would first want to spike the system with a hell of a lot of knowledge
(e.g., Dughof's "Slipnet" of related concepts whose links are subject to
cumulative, partial activation which eventually makes the nodes so
connected highly relevant and therefore taken into consideration by the
system).  To repeat Minsky (and probably most of the AI folk): one can
only learn if one already almost knows it.

      - Assuming human brains are embodiments of subcognitive systems,
    then we know how they were constructed: a very specific DNA
    blueprint controlling the paths of development possible at various
    times, with large assumptions as to the state of the intellectual
    environment.  This grand process was created by trial-and-error
    through the process of evolution, that is, essentially random
    chance. How much (if any) of the subcognitive system must be created
    essentially by random processes? If essentially all, then there are
    strict limits as to how the problem should be approached.

This is an empirical question.  If my current attempt to implement
the Copycat Project (which uses the Slipnet described above)
[forthcoming MIT AIM #755 by Doug Hofstadter] converges nicely, with
trivial tweaking, I'll be inclined to hold that random processes can
indeed do most of the work.  Such is my current, unfounded, belief.  On
the other hand, a failure will not debunk my position--I could always
have messed up implementationally and made bad guesses which "threw"
the system out of its potential convergence.

      - Which processes of the human brain are essentially subcognitive
    in construction, and which use other techniques? Is this balance
    optimal?  Which structures in a computational intelligence would be
    best approached subcognitively, and which by other methods?

Won't even touch the "optimal" question.  I would guess any process
involving a great deal of fan-in would need to be subcognitive in
nature.  This is argued from efficiency.  For now, and for want of
better theories, I'd approach ALL brain functions using subcognitive
models.  The alternative to this at present means von Neumannizing the
brain, an altogether quaint thing to do...

      - How are we to judge the success of a subcognitive system? The
    problems inherent in judging the "ability" of the so-called expert
    systems will be many times worse in this area. Without specific goal
    criteria, any results will be unsatisfying and potentially illusory
    to the watching world.

Performance and plausibility (in that order) ought to be our criteria.
Judging performance accurately, however, will continue to be difficult
as long as we are forced to use current computer architectures.
Still, if a subcognitive system converges at all on a LispM, there's no
reason to damn its performance.  Plausibility is easier to demonstrate;
one needs to keep in touch with the neurosciences to do that.

      - Where will thinking systems REALLY be more useful than (much
   refined) expert systems? I would guess that for many (most?)
   applications, expertise might be preferable to intelligence. Any
   suggestions about fields for which intelligent systems would have a
   real edge over (much improved) expert systems?

It's too early (or, too late?!) to draw such clean lines.  Perhaps REAL
thinking and expertise are much more intertwined than is currently
thought.  Anyway, there is nothing to be gained by pursuing that line of
questioning before WE learn how to explicitly organize knowledge better.


Over all, I defend pursuing things subcognitively for these reasons:

  -- Not expecting thinking to be a cleanly organized, top-down driven
  activity is minimizing one's expectations.  Compare thinking with such
  activities as cellular automata (e.g., The Game of Life) or The Iterated
  Pairwise Prisoner's Dilemma Game to convince yourself of the futility of
  top-down modeling where local rules and their iterated interactions are
  very successful at concisely describing the problem at hand.  No reason
  to expect the brain's top-level behavior to be any easier to explain
  away.

  -- AI has been spending a lot of itself on forcing a von Neumannian
  interpretation on the mind.  At CMU they have it down to an art, with
  Simon's "symbolic information processing" the nowadays proverbial Holy
  Grail.  With all due respect, I'd like to see more research devoted to
  modeling various alleged brain activities with a high degree of
  parallelism and probabilistic interaction, systems where "symbols" are
  not givens but intricately involved intermediates of computation.

  -- It has not been done carefully before and I want at least a thesis
  out of it.

                                -- Marek

------------------------------

Date: Mon, 16 Jan 1984  12:40 EST
From: GLD%MIT-OZ@MIT-MC.ARPA
Subject: minority report


     From: MAREK
     To repeat Minsky (and probably most of the AI folk): one can
     only learn if one already almost knows it.

By "can only learn if..." do you mean "can't >soon< learn unless...", or
do you mean "can't >ever< learn unless..."?

If you mean "can't ever learn unless...", then the statement has the Platonic
implication that a person at infancy must "already almost know" everything she
is ever to learn.  This can't be true for any reasonable sense of "almost
know".

If you mean "can't soon learn unless...", then by "almost knows X", do you
intend:

 o a narrow interpretation, by which a person almost knows X only if she
   already has knowledge which is a good approximation to understanding X--
   eg, she can already answer simpler questions about X, or can answer
   questions about X, but with some confusion and error; or
 o a broader interpretation, which, in addition to the above, counts as
   "almost knowing X" a situation where a person might be completely in the
   dark about X-- say, unable to answer any questions about X-- but is on the
   verge of becoming an instant expert on X, say by discovering (or by being
   told of) some easy-to-perform mapping which reduces X to some other,
   already-well-understood domain.

If you intend the narrow interpretation, then the claim is false, since people
can (sometimes) soon learn X in the manner described in the broad-
interpretation example.  But if you intend the broad interpretation, then the
statement expands to "one can't soon learn X unless one's current knowledge
state is quickly transformable to include X"-- which is just a tautology.

So, if this analysis is right, the statement is either false, or empty.

------------------------------

Date: Mon, 16 Jan 1984  20:09 EST
From: MAREK%MIT-OZ@MIT-MC.ARPA
Subject: minority report

         From: MAREK
         To repeat Minsky (and probably most of the AI folk): one can
         only learn if one already almost knows it.

    From: GLD
    By "can only learn if..." do you mean..."can't >ever< learn unless..."?

    If you mean "can't ever learn unless...", then the statement has
    the Platonic implication that a person at infancy must "already almost
    know" everything she is ever to learn.  This can't be true for any
    reasonable sense of "almost know".

I suppose I DO mean "can't ever learn unless".  However, I disagree
with your analysis.  The "Platonic implication" need not be what you
stated it to be if one cares to observe that some of the things an
entity can learn are...how to learn better and how to learn more.  My
original statement presupposes an existence of a category system--a
capacity to pigeonhole, if you will.  Surely you won't take issue with
the hypothesis that an infant's category system is lesser than that of
an adult.  Yet, faced with the fact that many infants do become
adults, we have to explain how the category system manages to grow
up, as well.

In order to do so, I propose to think of human learning
as a process where, say, in order to assimilate a chunk of information
one has to have a hundred-, nay, a thousand-fold store of SIMILAR
chunks.  This is by direct analogy with physical growing up--it
happens very slowly, gradually, incrementally--and yet it happens.

If you recall, my original statement was made against attempting
"wholesale learning" as opposed to "knowledge-rich" systems when
building subcognitive systems.  Admittedly, the complexity of a human
being is many orders of magnitude beyond what AI will attempt
for decades to come, yet by observing the physical development of a
child we can arrive at some sobering tips for how to successfully
build complex systems.  Abandoning the utopia of having complex
systems just "self-organize" and pop out of simple interactions of a
few even simpler pieces is one such tip.

                                -- Marek

------------------------------

Date: Tue 17 Jan 84 11:56:01-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - JANUARY 20, 1984

         [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 20, 1984   12:05

LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry

SPEAKER:  Harold Brown
          Stanford University

TOPIC:    Palladio:  An Exploratory Environment for Circuit Design

Palladio is an environment for experimenting with design methodologies
and  knowledge-based  design   aids.   It  provides   the  means   for
constructing, testing  and incrementally  modifying design  tools  and
languages.  Palladio  is  a  testbed  for  investigating  elements  of
design including  specification,  simulation, refinement  and  use  of
previous designs.

For  the  designer,   Palladio  supports  the   construction  of   new
specification languages  particular to  the design  task at  hand  and
augmentation of  the  system's  expert knowledge  to  reflect  current
design goals  and constraints.   For the  design environment  builder,
Palladio provides several  programming paradigms:  rule based,  object
oriented,  data   oriented  and   logical  reasoning   based.    These
capabilities are largely provided by two of the programming systems in
which Palladio is implemented: LOOPS and MRS.

In this talk,  we will  describe the  basic design  concepts on  which
Palladio is  based,  give  examples  of  knowledge-based  design  aids
developed   within   the   environment,   and   describe    Palladio's
implementation.

------------------------------

End of AIList Digest
********************

∂22-Jan-84  1625	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #9 
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Jan 84  16:25:11 PST
Date: Sun 22 Jan 1984 15:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #9
To: AIList@SRI-AI


AIList Digest            Monday, 23 Jan 1984        Volume 2 : Issue 9

Today's Topics:
  AI Culture - Survey Results Available,
  Digests - Vision-List Request,
  Expert Systems - Software Debugging,
  Seminars - Logic Programming & Bagel Architecture,
  Conferences - Principles of Distributed Computing
----------------------------------------------------------------------

Date: 18 Jan 84 14:50:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: How AI People Think - Cultural Premises of the AI Community...

                 [Reprinted from the Rutgers bboard.]

How AI People Think - Cultural Premises of the AI Community...
is the name of a report by sociologists at the University of Genoa, Italy,
based on a survey of AI researchers attending the International AI conference
(IJCAI-8) this past summer.  [...]

Smadar.

------------------------------

Date: Wed, 18 Jan 84 13:08:34 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: TO THOSE INTERESTED IN COMPUTER VISION, IMAGE PROCESSING, ETC

        This is the second notice directed to all of those interested
in IMAGE PROCESSING, COMPUTER VISION, etc.  There has been a great need,
and interest, in compiling a VISION list that caters to the specialized
needs and interests of those involved in image/vision processing/theory/
implementation.  I broadcast a message to this effect over this BBOARD
about three weeks ago asking for all those that are interested to
respond.  Again, I reiterate the substance of that message:

        1)  If you are interested in participating in a VISION list,
            and have not already expressed your interest to me,
            please do so!  NOW is the time to express that interest,
            since NOW is when the need for such a list is being
            evaluated.
        2)  I cannot moderate the list (due to a lack of the proper type
            of resources to deal with the increased mail traffic).  A
            moderator is DESPERATELY NEEDED!  I will assist you in
            establishing the list, and I am presently in contact with
            the moderator of AILIST (Ken LAWS@SRI-AI) to establish what
            needs to be done.  The job of moderator involves the
            following:
                i)   All mail for the list is sent to you
                ii)  You screen (perhaps, format or edit, depending upon
                     the time and effort you wish to expend) all
                     incoming messages, then redistribute them to the
                     participants on the list at regular intervals.
                iii) You maintain/update the distribution list.
           Needless to say, the job of moderator is extremely rewarding
           and involves a great deal of high visibility.  In addition,
           you get to GREATLY AID in the dissemination and sharing of
           ideas and information in this growing field.  Enough said...
        3) If you know of ANYONE that might be interested in such a
           list, PLEASE LET THEM KNOW and have them express that interest
           to me by sending mail to KAHN@UCLA-CS.ARPA

                                Now's the time to let me know!
                                Philip Kahn

                        send mail to:  KAHN@UCLA-CS.ARPA

------------------------------

Date: 19 Jan 84 15:14:04 EST
From: Lou <STEINBERG@RUTGERS.ARPA>
Subject: Re: Expert systems for software debugging

I don't know of any serious work in AI on software debugging since
HACKER.  HACKER was a part of the planning work done at MIT some years
ago - it was an approach to planning/automatic programming where
planning was done with a simple planner that, e.g., ignored
interactions between plan steps.  Then HACKER ran the plan/program and
had a bunch of mini-experts that detected various kinds of bugs.  See
Sussman, A Computer Model of Skill Acquisition, MIT Press, 1975.

Also, there is some related work in hardware debugging.  Are you aware
of the work by Randy Davis at MIT and by Mike Genesereth at Stanford on
hardware trouble shooting?  This is the problem where you have a piece
of hardware (e.g. a VAX) that used to work but is now broken, and you
want to isolate the component (board, chip, etc.) that needs to be
replaced.  Of course this is a bit different from program debugging,
since you are looking for a broken component rather than a mis-design.
E.g. for trouble shooting you can usually assume a single thing is
broken, but you often have multiple bugs in a program.

Here at Rutgers, we're working on an aid for design debugging for
VLSI.  Design debugging is much more like software debugging.  Our
basic approach is to use a signal constraint propagation method to
generate a set of possible places where the bug might be, and then use
various sorts of heuristics to prune the set (e.g.  a sub-circuit
that's been used often before is less likely to have a bug than a
brand new one).

------------------------------

Date: Fri, 20 Jan 84 8:39:38 EST
From: Paul Broome <broome@brl-bmd>
Subject: Re:  Expert systems for software debugging?


        Debugging is a black art, not at all algorithmic, but almost totally
        heuristic.  There is a lot of expert knowledge around about how
        to debug faulty programs, but it is rarely written down or
        systematized.  Usually it seems to reside solely in the minds of
        a few "debugging whizzes".

        Does anyone know of an expert system that assists in software
        debugging? Or any attempts (now or in the past) to produce such
        an expert?

There are some good ideas and a Prolog implementation in Ehud Shapiro's
Algorithmic Program Debugging, which is published as an ACM distinguished
dissertation by MIT Press, 1983.  One of his ideas is "divide-and-query:
a query-optimal diagnosis algorithm," which is essentially a simple binary
bug search.  If the program is incorrect on some input, then its
computation tree is divided into two roughly equal subtrees and the
intermediate result at the midpoint is queried.  If this intermediate
result is correct then the first subtree is ignored and the bug search
is repeated on the second subtree.  If the intermediate result is
incorrect then the search continues instead on the first subtree.
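On a linearized trace, the divide-and-query idea reduces to ordinary
binary search.  The sketch below is a toy: Shapiro's algorithm actually
queries nodes of the computation tree, and the trace, oracle, and
function name here are invented for illustration.

```python
# Toy rendering of divide-and-query on a linear computation trace.
# Assumes a single fault: is_correct(i) is True for every step before
# the bug and False from the buggy step onward (a monotone oracle).

def divide_and_query(trace, is_correct):
    """Return the index of the first incorrect step in `trace`."""
    lo, hi = 0, len(trace) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_correct(mid):
            lo = mid + 1     # midpoint correct: bug lies in the second half
        else:
            hi = mid         # midpoint wrong: bug at mid or earlier
    return lo

# Hypothetical trace whose first faulty step is index 3:
trace = ["ok", "ok", "ok", "bad", "bad"]
bug = divide_and_query(trace, lambda i: trace[i] == "ok")
```

Each query halves the suspect region, which is why Shapiro calls the
algorithm query-optimal.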

------------------------------

Date: 20 Jan 84 19:25:30-PST (Fri)
From: pur-ee!uiucdcs!nielsen @ Ucb-Vax
Subject: Re: Expert systems for software debugging - (nf)
Article-I.D.: uiucdcs.4980

The Knowledge Based Programming Assistant Project here at the University of
Illinois was founded as a result of a very similar proposal.
A thesis you may be interested in which explains some of our work is
"GPSI : An Expert System to Aid in Program Debugging" by Andrew Laursen
which should be available through the university.

I would be very interested in corresponding with anyone who is considering
the use of expert systems in program debugging.

                                        Paul Nielsen
                                        {pur-ee, ihnp4}!uiucdcs!nielsen
                                        nielsen@uiucdcs

------------------------------

Date: 01/19/84 22:25:55
From: PLUKEL
Subject: January Monthly Meeting, Greater Boston Chapter/ACM

                 [Forwarded from MIT by SASW@MIT-MC.]


        On behalf of GBC/ACM,  J. Elliott Smith, the Lecture Chairman, is
        pleased to present a discussion on the topic of

                                LOGIC PROGRAMMING

                              Henryk Jan Komorowski
                          Division of Applied Sciences
                               Harvard University
                            Cambridge, Massachusetts

             Dr. Komorowski is an Assistant Professor of Computer Science,
        who  received  his MS from  Warsaw University  and  his PhD  from
        Linkoeping University, Linkoeping, Sweden, in 1981.   His current
        research interests include applications of logic programming  to:
        rapid  prototyping,  programming/specification development envir-
        onments, expert systems, and databases.

             Dr.  Komorowski's  articles have appeared in proceedings  of
        the  IXth  POPL,  the 1980 Logic Programming Workshop  (Debrecen,
        Hungary),  and the book "Logic Programming",  edited by Clark and
        Taernlund.   He  acted  as Program Chairman for the  recent  IEEE
        Prolog tutorial at Brandeis University, is serving on the Program
        Committee  of  the  1984 Logic  Programming  Symposium  (Atlantic
        City),  and is a member of the Editorial Board of THE JOURNAL  OF
        LOGIC PROGRAMMING.

             Prolog  has been selected as the programming language of the
        Japanese  Fifth  Generation Computer Project.   It is  the  first
        realization of logic programming ideas,  and implements a theorem
        prover  based  on a design attributed  to  J.A.  Robinson,  which
        limits resolution to a Horn clause subset of assertions.

             A  Prolog program is a collection of true statements in  the
        form  of RULES.   A computation is a proof from these assertions.
        Numerous   implementations  of  Prolog  have   elaborated   Alain
        Colmerauer's original, including Dr. Komorowski's own Qlog, which
        operates in LISP environments.
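The "computation is a proof" idea can be shown in miniature.  The
sketch below is propositional only (no variables, no unification, no
backtracking), so it is far simpler than real Prolog; the rule format
and names are invented for illustration.

```python
# Minimal propositional Horn-clause prover: a "program" is a list of
# rules (head, [body atoms]); facts have an empty body.  A query is
# answered by deriving facts until the goal is proved or nothing new
# can be added -- computation as proof, in caricature.

def prove(rules, query):
    """Forward-chain over propositional Horn clauses."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)        # every premise holds, so derive head
                changed = True
    return query in known

program = [("parent", []),             # fact:  parent.
           ("ancestor", ["parent"])]   # rule:  ancestor :- parent.
result = prove(program, "ancestor")
```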

             Dr.  Komorowski  will present an introduction to  elementary
        logic  programming  concepts  and an overview  of  more  advanced
        topics,    including   metalevel   inference,    expert   systems
        programming, databases, and natural language processing.

                                 DATE:     Thursday, 26 January 1984
                                 TIME:     8:00 PM
                                 PLACE:    Intermetrics Atrium
                                           733 Concord Avenue
                                           Cambridge, MA
                                         (near Fresh Pond Circle)

                COMPUTER MOVIE and REFRESHMENTS before the talk.
                 Lecture dinner at 6pm open to all GBC members.
                   Call (617) 444-5222 for additional details.

------------------------------

Date: 20 Jan 84  1006 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Shapiro Seminars at Stanford and Berkeley

      [Adapted from the SU-SCORE bboard and the Prolog Digest.]


  Ehud Shapiro, The Weizmann Institute of Science
  The Bagel: A Systolic Concurrent Prolog Machine

  4:30pm, Terman Auditorium, Tues, Jan 24th, Stanford CSD Colloq.
  1:30pm, Evans 597, Wed., Jan 25th, Berkeley Prolog Seminar



It is argued that explicit mapping of processes to processors is
essential to effectively program a general-purpose parallel computer,
and, as a consequence, that the kernel language of such a computer
should include a process-to-processor mapping notation.

The Bagel is a parallel architecture that combines concepts of
dataflow, graph-reduction and systolic arrays. The Bagel's kernel
language is Concurrent Prolog, augmented with Turtle programs as a
mapping notation.

Concurrent Prolog, combined with Turtle programs, can easily implement
systolic systems on the Bagel. Several systolic process structures are
explored via programming examples, including linear pipes (sieve of
Eratosthenes, merge sort, natural-language interface to a database),
rectangular arrays (rectangular matrix multiplication, band-matrix
multiplication, dynamic programming, array relaxation), static and
dynamic H-trees (divide-and-conquer, distributed database), and
chaotic structures (a herd of Turtles).
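The linear-pipe sieve can be mimicked sequentially with a chain of
generators, each stage holding one prime and filtering its multiples
out of the stream it passes downstream, roughly as each process in a
systolic pipe would.  This is a sketch of the process structure only,
not of Concurrent Prolog or the Bagel.

```python
# Sieve of Eratosthenes as a pipeline: each stage is one filter
# process; appending a stage corresponds to spawning a new process
# at the end of the pipe.

def integers_from(n):
    while True:
        yield n
        n += 1

def sieve_stage(stream, prime):
    """One pipe stage: pass along only non-multiples of `prime`."""
    for n in stream:
        if n % prime != 0:
            yield n

def primes(count):
    stream = integers_from(2)
    out = []
    for _ in range(count):
        p = next(stream)                  # head of the stream is prime
        out.append(p)
        stream = sieve_stage(stream, p)   # extend the pipe by one stage
    return out

first = primes(6)
```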

All programs shown have been debugged using the Turtle graphics Bagel
simulator, which is implemented in Prolog.

------------------------------

Date: Fri 20 Jan 84 14:56:58-PST
From: Jayadev Misra <MISRA@SU-SIERRA.ARPA>
Subject: call for Papers- Principles of Distributed Computing


                         CALL FOR PAPERS
3rd ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC)

                        Vancouver, Canada
                      August 27 - 29, 1984

This conference will address fundamental issues in the theory  and
practice   of   concurrent  and  distributed  systems.   Original
research papers describing theoretical or  practical  aspects  of
specification,  design,  or  implementation  of  such  systems are
sought.  Topics of interest include, but are not limited to,  the
following aspects of concurrent and distributed systems:

  . Algorithms
  . Formal models of computations
  . Methodologies for program development
  . Issues in specifications, semantics and verifications
  . Complexity results
  . Languages
  . Fundamental results in application areas such as
                distributed databases, communication protocols, distributed
                operating systems, distributed transaction processing systems,
                real time systems.

Please send eleven copies of a detailed abstract (not a  complete
paper) not exceeding 10 double spaced typewritten pages, by MARCH
8, 1984, to the Program Chairman:

  Prof. J. Misra
  Computer Science Department
  University of Texas
  Austin, Texas 78712

The abstract must include a clear description of the problem  be-
ing  addressed, comparisons with extant work and a section on ma-
jor original contributions of this work.  The abstract must  pro-
vide  sufficient detail for the program committee to make a deci-
sion.  Papers will be chosen on the basis  of  scientific  merit,
originality, clarity and appropriateness for this conference.

Authors will be notified of acceptance by April  30,  1984.   Ac-
cepted  papers,  typed on special forms, are due at the above ad-
dress by June 1, 1984.  Authors of accepted papers will be  asked
to sign ACM Copyright forms.

The Conference Chairman is Professor  Tiko  Kameda  (Simon  Fraser
University).   The Publicity Chairman is Professor Nicola Santoro
(Carleton University).  The Local Arrangements Chairman is Profes-
sor Joseph Peters (Simon Fraser University).  The Program Commit-
tee consists of Ed Clarke (C.M.U.), Greg  N.  Frederickson  (Pur-
due),  Simon Lam (U of Texas, Austin), Leslie Lamport (SRI Inter-
national), Michael Malcom (U  of  Waterloo),  J.  Misra,  Program
Chairman  (U of Texas, Austin), Hector G. Molina (Princeton), Su-
san Owicki (Stanford), Fred Schneider (Cornell),  H.  Ray  Strong
(I.B.M. San Jose), and Howard Sturgis (Xerox Parc).

------------------------------

End of AIList Digest
********************

∂30-Jan-84  2209	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #10
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Jan 84  22:08:55 PST
Date: Thu 26 Jan 1984 14:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #10
To: AIList@SRI-AI


AIList Digest            Friday, 27 Jan 1984       Volume 2 : Issue 10

Today's Topics:
  AI Culture - IJCAI Survey,
  Cognition - Parallel Processing Query,
  Programming Languages - Symbolics Support & PROLOG/ZOG Request,
  AI Software - KEE Knowledge Representation System,
  Review - Rivest Forsythe Lecture on Learning,
  Seminars - Learning with Constraints & Semantics of PROLOG,
  Courses - CMU Graduate Program in Human-Computer Interaction
----------------------------------------------------------------------

Date: 24 Jan 84 12:19:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Report on "How AI People Think..."

I received a free copy because I attended IJCAI.  I have an address
here, but I don't know if it is the appropriate one for ordering this
report:

Re: the report "How AI People Think - Cultural Premises of the AI community"
Commission of the European Communities
Rue de la Loi, 200
B-1049 Brussels, Belgium

(The report was compiled by Massimo Negrotti, Chair of Sociology of
 Knowledge, University of Genoa, Italy)

Smadar (KEDAR-CABELLI@RUTGERS).

------------------------------

Date: Wed 18 Jan 84 11:05:26-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: brain, a parallel processor ?

What is the evidence that the brain is a parallel processor?  My own
introspection seems to indicate that mine is doing time-sharing.  That is,
I can follow only one idea at a time, but with a lot of switching
between reasoning paths (often more undirected than controlled
switching).  Do different people have different processors?  Or is the brain
able to function in more than one way (parallel, serial, time-sharing)?

Rene (bach@sumex)

------------------------------

Date: Wed, 25 Jan 84 15:37:39 CST
From: Mike Caplinger <mike@rice>
Subject: Symbolics support for non-Lisp languages

[This is neither an AI nor a graphics question per se, but I thought
these lists had the best chance of reaching Symbolics users...]

What kind of support do the Symbolics machines provide for languages
other than Lisp?  Specifically, are there interactive debugging
facilities for Fortran, Pascal, etc.?  It's my understanding that the
compilers generate Lisp output.  Is this true, and if so, is the
interactive nature of Lisp exploited, or are the languages just
provided as batch compilers?  Finally, does anyone have anything to say
about efficiency?

Answers to me, and I'll summarize if there's any interest.  Thanks.

------------------------------

Date: Wed 25 Jan 84 09:38:25-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: KEE Representation System

The Jan. issue of IEEE Computer Graphics reports the following:

IntelliGenetics has introduced the Knowledge Engineering Environment
AI software development system for AI professionals, computer
scientists, and domain specialists.  The database management program
development system is graphics oriented and interactive, permitting
use of a mouse, keyboard, command-option menus, display-screen
windows, and graphic symbols.

KEE is a frame-based representation system that provides support
for descriptive and procedural knowledge representation, and a
declarative, extendable formalism for controlling inheritance of
attributes and attribute values between related units of
knowledge.  The system provides support for multiple inheritance
hierarchies; the use of user-extendable data types to promote
knowledge-base integrity; object-oriented programming; multiple-
inference engines/rule systems; and a modular system design through
multiple knowledge bases.
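As a rough illustration of what "frame-based representation with
inheritance" means, here is a toy frame system.  The class and slot
names below are invented; this is not KEE's actual interface.

```python
# Toy frame system: units (frames) inherit attribute values from
# parent frames unless they override them locally.

class Frame:
    def __init__(self, name, parents=(), **slots):
        self.name = name
        self.parents = list(parents)
        self.slots = dict(slots)

    def get(self, slot):
        """Local value if present, else first value found by a
        depth-first walk up the (possibly multiple) parents."""
        if slot in self.slots:
            return self.slots[slot]
        for parent in self.parents:
            value = parent.get(slot)
            if value is not None:
                return value
        return None

device = Frame("device", voltage=5)                 # generic unit
chip = Frame("chip", parents=[device], pins=40)     # specialized unit
v = chip.get("voltage")                             # inherited from device
```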

The first copy of KEE sells for $60,000; the second for $20,000.
Twenty copies cost $5000 each.

------------------------------

Date: 01/24/84 12:08:36
From: JAWS@MIT-MC
Subject: PROLOG and/or ZOG for TOPS-10

Does anyone out there know where I can get a version of PROLOG and/or
ZOG that will run on a DEC-10 (7.01)?  The installation is owned by the
US government, albeit benign (DOT).

                                THANX JAWS@MC

------------------------------

Date: Tue 24 Jan 84 11:26:14-PST
From: Armar Archbold <ARCHBOLD@SRI-AI.ARPA>
Subject: Rivest Forsythe Lecture on Learning

[The following is a review of a Stanford talk, "Reflections on AI", by
Dr. Ron Rivest of MIT.  I have edited the original slightly after getting
Armar's permission to pass it along.  -- KIL]

Dr. Rivest's  talk  emphasized  the interest of small-scale studies of
learning through experience (a "critter"  with  a  few  sensing  and
effecting operations building up a world model of a blocks environment).
He stressed such familiar themes as

   - "the evolutionary function and value of world  models  is  predicting
     the  future,  and  consequently  knowledge is composed principally of
     expectations, possibilities, hypotheses -  testable  action-sensation
     sequences, at the lowest level of sophistication",

   - "the  field  of  AI  has  focussed  more  on 'backdoor AI', where you
     directly  program  in   data   structures   representing   high-level
     knowledge,  than  on  'front-door' AI, which studies how knowledge is
     built up from non-verbal experience, or 'side door AI', which studies
     how knowledge might be gained through teaching and instruction  using
     language;

   - such a study of simple learning systems in a simple environment -- in
     which an agent with a given  vocabulary  but  little  or  no  initial
     knowledge  ("tabula  rasa")  investigates  the  world (either through
      active experimentation or through changes imposed  by  perturbations
     in  the  surroundings)  and  attempts  to  construct a useful body of
     knowledge   through   recognition   of   identities,    equivalences,
     symmetries,  homomorphisms,  etc.,  and  eventually  metapatterns, in
     action-sensation chains (represented perhaps in dynamic logic) --  is
     of considerable interest.

Such concepts are not new. There have been many mathematical studies,
psychological simulations, and AI explorations along these lines since the
50s.  At SRI, Stan Rosenschein was playing around with a simplified learning
critter about a year ago; Peter Cheeseman shares Rivest's interest in
Jaynes' use of entropy calculations to induce safe hypotheses in an
overwhelmingly profuse space of possibilities.  Even so, these concerns
were worth having reactivated by a talk.  The issues raised by some of the
questions from the audience were also interesting, albeit familiar:

   - The critter which starts out with a tabula rasa  will  only  make  it
     through  the  enormous  space  of  possible  patterns induceable from
     experience if it initially "knows" an awful lot about how  to  learn,
     at  whatever  level  of  procedural  abstraction  and/or  "primitive"
     feature selection (such as that done at the level of the eye itself).

   - Do we call intelligence the procedures that permit one to gain useful
     knowledge (rapidly), or the knowledge thus gained, or what mixture of
     both?

   - In addition, there is the question  of  what  motivational  structure
     best furthers the critter's education.  If the critter attaches value
     to  minimum  surprise (various statistical/entropy measures thereof),
     it can sit in a corner and do nothing, in which case it may  one  day
     suddenly  be very surprised and very dead.  If it attaches tremendous
     value to surprise, it could just flip a coin and always  be  somewhat
     surprised.    The  mix  between repetition (non-surprise/confirmatory
     testing) and exploration which produces the best cognitive system  is
     a  fundamental  problem.   And there is the notion of "best" - "best"
     given the critter's values other than curiosity, or "best"  in  terms
     of  survivability,  or  "best"  in  a  kind  of  Occam's  razor sense
     vis-a-vis truth (here it was commented you could rank Carnapian world
     models based on the  simple  primitive  predicates  using  Kolmogorov
     complexity measures, if one could only calculate the latter...)

   - The  success  or  failure  of the critter to acquire useful knowledge
     depends very much on the particular world it is placed in.    Certain
     sequences  of  stimuli will produce learning and others won't, with a
     reasonable, simple learning procedure.  In simple artificial  worlds,
     it  is possible to form some kind of measure of the complexity of the
     environment by seeing what the minimum length action-sensation chains
     are which are true regularities.  Here there is  another  traditional
     but  fascinating question: what are the best worlds for learning with
     respect to  critters  of  a  given  type  -  if  the  world  is  very
     stochastic,  nothing  can  be learned in time; if the world is almost
     unchanging, there is little motivation to learn and  precious  little
     data about regular covariances to learn from.

     Indeed,  in  psychological studies, there are certain sequences which
     will bolster reliance on certain conclusions to such an  extent  that
     those    conclusions    become    (illegitimately)   protected   from
     disconfirmation.  Could one recreate this phenomenon  with  a  simple
     learning  critter  with a certain motivational structure in a certain
     kind of world?

Although these issues seemed familiar, the talk certainly could stimulate
the general public.

                                                                 Cheers - Armar

------------------------------

Date: Tue 24 Jan 84 15:45:06-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FRIDAY, January 27, 1984

           [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 27, 1984
Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:  Tom Dietterich, HPP
          Stanford University

TOPIC:    Learning with Constraints

In attempting to construct a program  that can learn the semantics  of
UNIX commands, several shortcomings of existing AI learning techniques
have been  uncovered.  Virtually  all  existing learning  systems  are
unable to (a)  perform data  interpretation in a  principled way,  (b)
form theories about systems that contain substantial amounts of  state
information, (c) learn from  partial data, and (d)  learn in a  highly
incremental fashion.  This talk  will describe these shortcomings  and
present techniques  for overcoming  them.  The  basic approach  is  to
employ a vocabulary of constraints to represent partial knowledge  and
to apply  constraint-propagation techniques  to draw  inferences  from
this partial knowledge.  These techniques  are being implemented in  a
system called EG, whose task is to learn the semantics of 13 UNIX
commands (ls, cp,  mv, ln, rm,  cd, pwd, chmod,  umask, type,  create,
mkdir, rmdir) by watching "over-the-shoulder" of a teacher.
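One way to read "constraints to represent partial knowledge" is
hypothesis elimination: partial knowledge of a command is the set of
hypotheses still consistent with the observations so far, and each new
observation propagates by intersection.  The sketch below is entirely
illustrative (invented hypotheses and names) and far simpler than the
EG system itself.

```python
# Partial knowledge as a shrinking hypothesis set: each observed
# (input, output) pair eliminates the hypotheses it contradicts.

def refine(hypotheses, observation):
    """Keep only hypotheses consistent with an observed (input, output)."""
    inp, out = observation
    return {h for h in hypotheses if HYPOTHESES[h](inp) == out}

# A toy space of candidate command semantics:
HYPOTHESES = {
    "identity":  lambda s: s,
    "uppercase": lambda s: s.upper(),
    "reverse":   lambda s: s[::-1],
}

possible = set(HYPOTHESES)
possible = refine(possible, ("ab", "ab"))  # rules out uppercase, reverse
```

After one observation the learner's "partial knowledge" is exactly the
surviving set; no single hypothesis ever had to be committed to.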

------------------------------

Date: 01/25/84 17:07:14
From: AH
Subject: Theory of Computation Seminar

                       [Forwarded from MIT-MC by SASW.]


                           DATE:  February 2nd, 1984
                           TIME:  3:45PM  Refreshments
                                  4:00PM  Lecture
                          PLACE:  NE43-512A

           "OPERATIONAL AND DENOTATIONAL SEMANTICS FOR P R O L O G"

                                      by

                                 Neil D. Jones
                              Datalogisk Institut
                             Copenhagen University

                                   Abstract

  A PROLOG program can go into an infinite loop even when there exists a
refutation of its clauses by resolution theorem proving methods.  Consequently
one cannot identify resolution of Horn clauses in first-order logic with
PROLOG as it is actually used, namely, as a deterministic programming
language.  In this talk two "computational" semantics of PROLOG will be given.
One is operational and is expressed as an SECD-style interpreter which is
suitable for computer implementation.  The other is a Scott-Strachey style
denotational semantics.  Both were developed from the SLD-refutation procedure
of Kowalski and of Apt and van Emden, and both handle "cut".
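The abstract's opening point has a tiny propositional illustration:
with the clause ordering  p :- p.  p.  a depth-first prover descends
into the left-recursive clause forever, even though the fact  p.
yields an immediate refutation.  In the bounded interpreter below (all
names invented), exhausting the depth bound stands in for Prolog's
actual non-termination.

```python
# Propositional, depth-bounded, depth-first SLD-style search.
# Clauses are (head, [body atoms]) and are tried in program order,
# as in Prolog.

class Loops(Exception):
    """Stand-in for non-termination: the depth bound was exhausted."""

def solve(goal, clauses, depth=25):
    if depth == 0:
        raise Loops          # depth-first descent never returned
    for head, body in clauses:
        if head == goal and all(solve(b, clauses, depth - 1) for b in body):
            return True
    return False

looping = [("p", ["p"]), ("p", [])]   # recursive clause first: loops
fine    = [("p", []), ("p", ["p"])]   # fact first: succeeds at once

try:
    solve("p", looping)
    looped = False
except Loops:
    looped = True

proved = solve("p", fine)
```

The same set of clauses thus terminates or loops depending only on
ordering, which is why resolution and PROLOG-as-used cannot be
identified.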

HOST:  Professor Albert R. Meyer

------------------------------

Date:     Wednesday, 25 Jan 84 23:47:29 EST
From:     reiser (brian reiser) @ cmu-psy-a
Reply-to: <Reiser%CMU-PSY-A@CMU-CS-PT>
Subject:  Human-Computer Interaction Program at CMU

                         ***** ANNOUNCEMENT *****

              Graduate Program in Human-Computer Interaction
                       at Carnegie-Mellon University

The  field  of  human-computer  interaction  brings  to  bear  theories and
methodologies from cognitive psychology and computer science to the  design
of   computer   systems,   to   instruction   about   computers,   and   to
computer-assisted instruction.  The new Human-Computer Interaction  program
at  CMU is geared toward the development of cognitive models of the complex
interaction between learning, memory, and language mechanisms  involved  in
using  computers.    Students  in  the  program  apply their psychology and
computer science  training  to  research  in  both  academic  and  industry
settings.

Students in the Human-Computer Interaction program design their educational
curricula  with  the  advice  of  three  faculty  members  who serve as the
student's committee.  The intent  of  the  program  is  to  guarantee  that
students   have  the  right  combination  of  basic  and  applied  research
experience and coursework so that they  can  do  leading  research  in  the
rapidly developing field of human-computer interaction.  Students typically
take  one  psychology  course and one computer science course each semester
for the first two years.  In addition, students participate in a seminar on
human-computer interaction held during the summer  of  the  first  year  in
which  leading  industry  researchers are invited to describe their current
projects.

Students are also actively involved in research throughout  their  graduate
career.    Research  training  begins  with  a collaborative and apprentice
relationship with a faculty member in laboratory research for the first one
or two years of the program.  Such involvement allows the  student  several
repeated   exposures  to  the  whole  sequence  of  research  in  cognitive
psychology and computer science, including conceptualization of a  problem,
design   and   execution   of   experiments,  analyzing  data,  design  and
implementation of computer systems, and writing scientific reports.

In the second half  of  their  graduate  career,  students  participate  in
seminars,  teaching,  and  an  extensive  research project culminating in a
dissertation.  In addition, an important component  of  students'  training
involves  an  internship working on an applied project outside the academic
setting.  Students and faculty in the  Human-Computer  Interaction  program
are  currently studying many different cognitive tasks involving computers,
including: construction of algorithms, design of instruction  for  computer
users,  design of user-friendly systems, and the application of theories of
learning and problem solving to the design of systems for computer-assisted
instruction.

Carnegie-Mellon University is exceptionally well suited for  a  program  in
human-computer   interaction.    It  combines  a  strong  computer  science
department with a strong  psychology  department  and  has  many  lines  of
communication  between  them.   There are many shared seminars and research
projects.  They also share in a computational community defined by a  large
network  of  computers.  In addition, CMU and IBM have committed to a major
effort to integrate personal computers into college education.    By  1986,
every  student  on  campus  will  have a powerful state-of-the-art personal
computer.  It is anticipated that members of the Human-Computer Interaction
program will be involved in various aspects of this effort.

The  following  faculty  from  the  CMU  Psychology  and  Computer  Science
departments  are  participating  in the Human-Computer Interaction Program:
John R. Anderson, Jaime G. Carbonell, John  R. Hayes,  Elaine  Kant,  David
Klahr,  Jill  H. Larkin, Philip L. Miller, Allen Newell, Lynne M. Reder, and
Brian J. Reiser.

Our   deadline   for   receiving   applications,   including   letters   of
recommendation,  is  March  1st.  Further information about our program and
application materials may be obtained from:

     John R. Anderson
     Department of Psychology
     Carnegie-Mellon University
     Pittsburgh, PA  15213

------------------------------

End of AIList Digest
********************

∂02-Feb-84  0229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #11
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Feb 84  02:28:51 PST
Date: Tue 31 Jan 1984 10:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #11
To: AIList@SRI-AI


AIList Digest            Tuesday, 31 Jan 1984      Volume 2 : Issue 11

Today's Topics:
  Techniques - Beam Search Request,
  Expert Systems - Expert Debuggers,
  Mathematics - Arnold Arnold Story,
  Courses - PSU Spring AI Mailing Lists,
  Awards - Fredkin Prize for Computer Math Discovery,
  Brain Theory - Parallel Processing,
  Intelligence - Psychological Definition,
  Seminars - Self-Organizing Knowledge Base, Learning, Task Models
----------------------------------------------------------------------

Date: 26 Jan 1984 21:44:11-EST
From: Peng.Si.Ow@CMU-RI-ISL1
Subject: Beam Search

I would be most grateful for any information/references to studies and/or
applications of Beam Search, the search procedure used in HARPY.
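[For readers unfamiliar with the technique: beam search is breadth-first
search that prunes the frontier to the k highest-scoring partial paths at
each level.  A minimal generic sketch follows; the successor and scoring
functions are illustrative placeholders, not HARPY's. -- KIL]

```python
from heapq import nlargest

def beam_search(start, successors, score, beam_width, depth):
    """Breadth-first search that keeps only the `beam_width`
    highest-scoring partial paths at each level."""
    beam = [[start]]
    for _ in range(depth):
        # Expand every retained path by one step...
        candidates = [path + [s] for path in beam for s in successors(path[-1])]
        if not candidates:
            break
        # ...then prune the frontier back down to the best beam_width paths.
        beam = nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)

# Toy example: states are integers, each with successors n+1 and n+2,
# and a path's score is the sum of its states.
best = beam_search(0, lambda n: [n + 1, n + 2], lambda p: sum(p), beam_width=2, depth=3)
# best == [0, 2, 4, 6]
```

Unlike full breadth-first search, the pruning makes the frontier size
constant, at the cost of possibly discarding the optimal path.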

                                                        Peng Si Ow
                                                      pso@CMU-RI-ISL1

------------------------------

Date: 25 Jan 84 7:51:06-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: Expert debuggers
Article-I.D.: uvacs.1148

        See also "Sniffer: a system that understands bugs", Daniel G. Shapiro,
MIT AI Lab Memo AIM-638, June 1981
        (The debugging knowledge of Sniffer is organized as a bunch of tiny
experts, each understanding a specific type of error.  The program has an in-
depth understanding of a (very) limited class of errors.  It consists of
a cliche-finder and a "time rover".  Master's thesis.)

------------------------------

Date: Thursday, 26-Jan-84  19:11:37-GMT
From: BILL (on ERCC DEC-10) <Clocksin%edxa@ucl-cs.arpa>
Reply-to: Clocksin <Clocksin%edxa@ucl-cs.arpa>
Subject: AIList entry

In reference to a previous AIList correspondent wishing to know more about
Arnold Arnold's "proof" of Fermat's Last Theorem, last week's issue of
New Scientist explains all.  The "proof" is faulty, as expected.
Mr Arnold is a self-styled "cybernetician" who has a history of grabbing
headlines with announcements of revolutionary results which are later
proven faulty on trivial grounds.  I suppose A.I. has to put up with
its share of circle squarers and angle trisectors.

------------------------------

Date: 28 Jan 84 18:23:09-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psu
      vax!bobgian@Ucb-Vax
Subject: PSU Spring AI mailing lists
Article-I.D.: psuvax.433

I will be using net.ai for occasionally reporting "interesting" items
relating to the PSU Spring AI course.

If anybody would also like "administrivia" mailings (which could get
humorous at times!), please let me know.

Also, if you want to be included on the "free-for-all" discussion list,
which will include flames and other assorted idiocies, let me know that
too.  Otherwise you'll get only "important" items.

The "official Netwide course" (ie, net.ai.cse) will start up in a month
or so.  Meanwhile, you are welcome to join the fun via mail!

Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 26 Jan 84 19:39:53 EST
From: AMAREL@RUTGERS.ARPA
Subject: Fredkin Prize for Computer Math Discovery

                 [Reprinted from the RUTGERS bboard.]

Fredkin Prize to be Awarded for Computer Math Discovery

LOUISVILLE,  Ky.--The  Fredkin  Foundation  will award a $100,000 prize for the
first computer to make a major mathematical discovery, it was  announced  today
(Jan. 26).

Carnegie-Mellon  University  has  been  named trustee of the "Fredkin Prize for
Computer Discovery in Mathematics", according to Raj  Reddy,  director  of  the
university's  Robotics  Institute,  and a trustee of IJCAI (International Joint
Conferences on Artificial Intelligence) responsible for AI prizes.  Reddy said the
prize  will be awarded "for a mathematical work of distinction in which some of
the pivotal ideas have been found automatically by a computer program in  which
they were not initially implicit."

"The criteria for awarding this prize will be widely publicized and reviewed by
the  artificial  intelligence  and  mathematics  communities to determine their
adequacy," Reddy said.

Dr. Woody Bledsoe of the University of Texas at Austin will head a committee of
experts  who  will  define  the  rules  of  the  competition.      Bledsoe   is
president-elect of the American Association for Artificial Intelligence.

"It  is  hoped,"  said  Bledsoe,  "that  this  prize  will stimulate the use of
computers in mathematical research and have a good long-range effect on all  of
science."

The  committee  of mathematicians and computer scientists which will define the
rules of the competition includes:  William Eaton of the University of Texas at
Austin, Daniel  Gorenstein  of  Rutgers  University,  Paul  Halmos  of  Indiana
University,  Ken  Kunen  of  the  University of Wisconsin, Dan Mauldin of North
Texas State University and John McCarthy of Stanford University.

Also, Hugh Montgomery of the University of Michigan, Jack Schwartz of New  York
University,  Michael  Starbird  of  the  University  of  Texas  at  Austin, Ken
Stolarsky of  the  University  of  Illinois  and  Francois  Treves  of  Rutgers
University.

The  Fredkin Foundation has a similar prize for a world champion computer chess
system.  Recently, $5,000 was awarded to Ken Thompson and Joseph  Condon,  Bell
Laboratories  researchers  who developed the first computer system to achieve a
Master rating in tournament chess.

------------------------------

Date: 26 Jan 84 15:34:50 PST (Thu)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Re: Rene Bach's query on parallel processing in the brain

What happens when something is "on the tip of your tongue" but is beyond
recall?  Often (for me at least), if the effort to recall is displaced
by some other cognitive activity, the searched-for information pops up
at a later time.  To me, this suggests at least one background process.

                                -Mike (mab@AIDS-UNIX)

------------------------------

Date: Thu, 26 Jan 84 17:19:30 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: How my brain works

I find that most of what my brain does is pattern interpretation.  I receive
various sensory input in the form of various kinds of vibrations (i.e.
electromagnetic and acoustic) and my brain perceives patterns in this muck.
Then it attaches meanings to the patterns.  Within limits, I can attach these
meanings at will.  The process of logical deduction a la Socrates takes up
a negligible time-slice in the CPU.

  --Charlie

------------------------------

Date: Fri, 27 Jan 84 15:35:21 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: How my brain works

I see what you mean about the question as to whether the brain is a parallel
processor in conscious reasoning or not.  I also feel like a little daemon that
sits and pays attention to different lines of thought at different times.

An interesting counterexample is the aha! phenomenon.  The mathematician
Henri Poincare, among others, has written an essay about his experience of
being interrupted from his conscious attention somehow and becoming instantly
aware of the solution to a problem he had "given up" on some days before.
It was as though some part of his brain had been working on the problem all
along even though he had not been aware of it.  When it had gotten the solution
an interrupt occurred and his conscious mind was triggered into the awareness
of  the solution.

  --Charlie

------------------------------

Date: Mon 30 Jan 84 09:47:49-EST
From: Alexander Sen Yeh <AY@MIT-XX.ARPA>
Subject: Request for Information

I am getting started on a project which combines symbolic artificial
intelligence and image enhancement techniques.  Any leads on past and
present attempts at doing this (or at combining symbolic a.i. with
signal processing or even numerical methods in general) would be
greatly appreciated.  I will send a summary of replies to AILIST and
VISION LIST in the future.  Thanks.

--Alex Yeh
--electronic mail: AY@MIT-XX.ARPA
--US mail: Rm. 222, 545 Technology Square, Cambridge, MA 02139

------------------------------

Date: 30 January 1984 1554-est
From: RTaylor.5581i27TK @ RADC-MULTICS
Subject: RE:  brain, a parallel processor ?

I agree that based on my own observations, my brain appears to be
working more like a time-sharing unit...complete with slowdowns,
crashes, etc., due to overloading the inputs by fatigue, poor maintenance,
and numerous inputs coming too fast to be covered by the
time-sharing/switching mechanism!
                              Roz

------------------------------

Date: Monday, 30 Jan 84 14:33:07 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Psychological Definition of (human) Intelligence

Recommended reading for persons interested in a psychological view of
(human) intelligence:

Sternberg, R.J. (1984) "What should intelligence tests test?  Implications
 of a triarchic theory of intelligence for intelligence testing."  In
 Educational Researcher, Jan 1984, Vol. 13 #1.

This easily read article (written for educational researchers) reviews
Sternberg's current view of what makes intelligent persons intelligent:

"The triarchic theory accounts for why IQ tests work as well as they do
 and suggests ways in which they might be improved...."

Although the readership of this list are probably not interested in IQ tests
per se, Sternberg is the foremost cognitive psychologist concerned directly
with intelligence so his view of "What is intelligence?" will be of interest.
This is reviewed quite nicely in the cited paper:

"The triachric theory of human intelligence comprises three subtheories.  The
first relates intelligence to the internal world of the individual,
specifying the mental mechanisms that lead to more and less intelligent
behavior.  This subtheory specifies three kinds of information processing
components that are instrumental in (a) learning how to do things, (b)
planning what to do and how to do them, and in (c) actually doing them. ...
The second subtheory specifies those points along the continuum of one's
experience with tasks or situations that most critically involve the use of
intelligence.  In particular, the account emphasizes the roles of novelty
(...) and of automatization (...) in intelligence.  The third subtheory
relates intelligence to the external world of the individual, specifying
three classes of acts -- environmental adaptation, selection, and shaping --
that characterize intelligent behavior in the everyday world."

There is more detail in the cited article.

(Robert J. Sternberg is professor of Psychology at Yale University.  See
also, his paper in Behavioral and Brain Sciences (1980, 3, 573-584): "Sketch of
a componential subtheory of human intelligence." and his book (in press with
Cambridge Univ. Press): "Beyond IQ: A triarchic theory of human
intelligence.")

------------------------------

Date: Thu 26 Jan 84 14:11:55-CST
From: CS.BUCKLEY@UTEXAS-20.ARPA
Subject: Database Seminar

                [Reprinted from the UTEXAS-20 bboard.]

    4-5 Wed afternoon in Pai 5.60 [...]

    Mail-From: CS.LEVINSON created at 23-Jan-84 15:47:25

    I am developing a system which will serve as a self-organizing
    knowledge base for an expert system. The knowledge base is currently
    being developed to store and retrieve Organic Chemical reactions. As
    the fundamental structures of the system are merely graphs and sets,
    I am interested in finding other domains in which the system could be used.

    Expert systems require a large amount of knowledge in order to perform
    their tasks successfully. In order for knowledge to be useful for the
    expert task it must be characterized accurately. Data characterization
    is usually the responsibility of the system designer and the
    consulting experts. It is my belief that the computer itself can be
    used to help characterize and classify its knowledge. The system's
    design is based on the assumption that the key to knowledge
    characterization is pattern recognition.

------------------------------

Date: 28 Jan 84 21:25:17 EST
From: MSIMS@RUTGERS.ARPA
Subject: Machine Learning Seminar Talk by R. Banerji

                 [Reprinted from the RUTGERS bboard.]

                MACHINE LEARNING SEMINAR

Speaker:        Ranan Banerji
                St. Joseph's University, Philadelphia, Pa. 19130

Subject:        An explanation of 'The Induction of Theories from
                Facts' and its relation to LEX and MARVIN


In Ehud Shapiro's Yale thesis work he presented a framework for
inductive inference in logic, called the incremental inductive
inference algorithm.  His Model Inference System was able to infer
axiomatizations of concrete models from a small number of facts in a
practical amount of time.  Dr. Banerji will relate Shapiro's work to
the kind of inductive work going on with the LEX project using the
version space concept of Tom Mitchell, and the positive focusing work
represented by Claude Sammut's MARVIN.

Date:           Monday, January 30, 1984
Time:           2:00-3:30
Place:          Hill 7th floor lounge (alcove)

------------------------------

Date: 30 Jan 84  1653 PST
From: Terry Winograd <TW@SU-AI>
Subject: Talkware seminar Mon Feb 6, Tom Moran (PARC)

                [Reprinted from the SU-SCORE bboard.]

Talkware Seminar (CS 377)

Date: Feb 6
Speaker: Thomas P. Moran, Xerox PARC
Topic: Command Language Systems, Conceptual Models, and Tasks
Time: 2:15-4
Place: 200-205

Perhaps the most important property for the usability of command language
systems is consistency.  This notion usually refers to the internal
(self-) consistency of the language.  But I would like to reorient the
notion of consistency to focus on the task domain for which the system
is designed.  I will introduce a task analysis technique, called
External-Internal Task (ETIT) analysis.  It is based on the idea that
tasks in the external world must be reformulated into the internal
concepts of a computer system before the system can be used.  The
analysis is in the form of a mapping between sets of external tasks and
internal tasks.  The mapping can be either direct (in the form of rules)
or "mediated" by a conceptual model of how the system works.  The direct
mapping shows how a user can appear to understand a system, yet have no
idea how it "really" works.  Example analyses of several text editing
systems and, for contrast, copiers will be presented; and various
properties of the systems will be derived from the analysis.  Further,
it is shown how this analysis can be used to assess the potential
transfer of knowledge from one system to another, i.e., how much knowing
one system helps with learning another.  Exploration of this kind of
analysis is preliminary, and several issues will be raised for
discussion.

------------------------------

End of AIList Digest
********************

∂03-Feb-84  2358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #12
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Feb 84  23:57:46 PST
Date: Fri  3 Feb 1984 22:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #12
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Feb 1984      Volume 2 : Issue 12

Today's Topics:
  Hardware - Lisp Machine Benchmark Request,
  Machine Translation - Request,
  Mathematics - Fermat's Last Theorem & Four Color Request,
  Alert - AI Handbooks & Constraint Theory Book,
  Expert Systems - Software Debugging Correction,
  Course - PSU's Netwide AI Course,
  Conferences -  LISP Conference Deadline & Cybernetics Congress
----------------------------------------------------------------------

Date: Wed, 1 Feb 84 16:37:00 cst
From: dyer@wisc-ai (Chuck Dyer)
Subject: Lisp Machines

Does anyone have any reliable benchmarks comparing Lisp
machines, including Symbolics, Dandelion, Dolphin, Dorado,
LMI, VAX 780, etc?

Other features for comparison are also of interest.  In particular,
what capabilities are available for integrating a color display
(at least 8 bits/pixel)?

------------------------------

Date: Thu 2 Feb 84 01:54:07-EST
From: Andrew Y. Chu <AYCHU@MIT-XX.ARPA>
Subject: language translator

                     [Forwarded by SASW@MIT-ML.]

Hi, I am looking for some information on language translation
(No, not fortran->pascal; more like english->french).
Does anyone at MIT work in this field?  If not, anyone at other
schools?  Someone in industry?  A commercial product?
Pointers to articles, magazines, journals, etc. will be greatly appreciated.

Please reply to aychu@mit-xx.  I want this message to reach as
many people as possible; are there other bboards I can send it to?
Thanx.

------------------------------

Date: Thu, 2 Feb 84 09:48:48 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Fermat's Last Theorem

Fortunately (or unfortunately) puzzles like Fermat's Last Theorem, Goldbach's
conjecture, the 4-color theorem, and others are not in the same class as
the geometric trisection of an angle or the squaring of a circle.  The former
class may be undecidable propositions (a la Goedel) and the latter are merely
impossible.  Since one of the annoying things about undecidable propositions
is that it cannot be decided whether or not they are decidable, (Where are
you, Doug Hofstadter, now that we need you?) people seriously interested in
these candidates for undecidability should not dismiss so-called theorem
provers like A. Arnold without looking at their work.

I have heard that the ugly computer proof(?) of the 4-color theorem that
appeared in Scientific American is incorrect, i.e. not a proof.  I also
have heard that one G. Spencer-Brown has proved the 4-color theorem.  I
do not know whether either of these things is true and it's bugging me!
Is the 4-color theorem undecidable or not?

  --Charlie

------------------------------

Date: 30 Jan 84 19:48:36-PST (Mon)
From: pur-ee!uiucdcs!uicsl!keller @ Ucb-Vax
Subject: AI Handbooks only $3.95
Article-I.D.: uiucdcs.5251

        Several people here have joined "The Library of Computer and
Information Sciences Book Club" because they have an offer of the complete
AI Handbook set (3 vols) for $3.95 instead of the normal $100.00. I got mine
and they are the same production as non book club versions. You must buy
three more books during the coming year, and it will probably be easy to
find ones that you want.  Here are the details:

Send to: The Library of Computer and Information Sciences
         Riverside NJ 08075

Copy of Ad:
Please accept my application for trial membership in the Library of Computer
and Information Sciences and send me the 3-volume HANDBOOK OF ARTIFICIAL
INTELLIGENCE (10079) billing me only $3.95. I agree to purchase at least
three additional Selections or Alternates over the next 12 months. Savings
may range up to 30% and occasionally even more. My membership is cancelable
any time after I buy these three books. A shipping and handling charge is
added to all shipments.

No-Risk Guarantee: If you are not satisfied--for any reason--you may return
the HANDBOOK OF ARTIFICIAL INTELLIGENCE within 10 days and your membership
will be canceled and you will owe nothing.

Name ________
Name of Firm ____ (if you want subscription to your office)
Address _____________
City ________
State _______ Zip ______

(Offer good in Continental U.S. and Canada only. Prices slightly higher in
Canada.)

Scientific American 8/83    7-BV8

-Shaun ...uiucdcs!uicsl!keller

[I have been a member for several years, and have found this club's
service satisfactory (and improving).  The selection leans towards
data processing and networking, but there have been a fair number
of books on AI, graphics and vision, robotics, etc.  After buying
several books you get enough bonus points for a very substantial
discount on a selection of books that you passed up when they were
first offered.  I do get tired, though, of the monthly brochures that
use the phrase "For every computer professional, ..." in the blurb for
nearly every book.  If you aren't interested in the AI Handbook,
find a current club member for a list of other books you can get
when you enroll.  The current member will also get a book for signing
you up.  -- KIL]

------------------------------

Date: 31 Jan 84 19:55:24-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Constraint Theory - (nf)
Article-I.D.: uiucdcs.5285


*********************BOOK ANNOUNCEMENT*******************************

                     CONSTRAINT THEORY
                 An Approach to Policy-Level
                         Modelling
                             by
                     Laurence D. Richards

The cybernetic concepts of variety, constraint, circularity, and
process provide the foundations for a theoretical framework for the
design of policy support systems.  The theoretical framework consists
of a modelling language and a modelling mathematics.  An approach to
building models for policy support systems is detailed; two case
studies that demonstrate the approach are described.  The modelling
approach focuses on the structure of mental models and the subjectivity
of knowledge.  Consideration is given to ideas immanent in
second-order cybernetics, including paradox, self-reference, and
autonomy. Central themes of the book are "complexity", "negative
reasoning", and "robust" or "value-rich" policy.

424 pages; 23 tables; 56 illustrations
Hardback: ISBN 0-8191-3512-7 $28.75
Paperback:ISBN 0-8191-3513-5 $16.75

order from:
                          University Press of America
                                4720 Boston Way
                           Lanham, Maryland 20706 USA

------------------------------

Date: 28 Jan 84 0:25:20-PST (Sat)
From: pur-ee!uiucdcs!renner @ Ucb-Vax
Subject: Re: Expert systems for software debugging
Article-I.D.: uiucdcs.5217

Ehud Shapiro's error diagnosis system is not an expert system.  It doesn't
depend on a heuristic approach at all.  Shapiro tries to find the faulty part
of a bad program by executing part of the program, then asking an "oracle" to
decide if that part worked correctly.  I am very impressed with Shapiro's
work, but it doesn't have anything to do with "expert knowledge."
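[The idea can be sketched for the simplest case: a linear trace in which
an error propagates forward.  Shapiro's actual system works over execution
trees of Prolog programs, but the oracle-driven bisection is the same.
The trace format and data below are made up for illustration. -- KIL]

```python
def find_faulty_step(steps, oracle):
    """Binary-search a trace of (description, result) pairs for the
    first step whose result the oracle rejects.  Assumes the run's
    final result is wrong and that a fault corrupts every later
    result, so the oracle's answers are monotone along the trace.
    Needs only O(log n) oracle queries instead of one per step."""
    lo, hi = 0, len(steps) - 1      # fault lies somewhere in steps[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(steps[mid]):      # results up through mid are correct:
            lo = mid + 1            #   fault must come later
        else:                       # result already wrong here:
            hi = mid                #   fault is at or before mid
    return steps[lo]
```

Here the "oracle" is whoever can judge intermediate results, typically the
programmer; the point of organizing debugging this way is that only a
logarithmic number of such judgments is needed.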

Scott Renner
{ihnp4,pur-ee}!uiucdcs!renner

------------------------------

Date: 28 Jan 84 12:25:56-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psuvax!bobgian @ Ucb-Vax
Subject: PSU's Netwide AI course
Article-I.D.: psuvax.432

The PSU ("in person") component of the course has started up, but things
are a bit slow and confused regarding the "netwide" component.

For one thing, I am too busy finishing a thesis and teaching full-time to
handle the administrative duties, and we don't (yet, at least) have the
resources to hire others to do it.

For another, my plans presupposed a level of intellectual maturity and
drive that is VERY rare in Penn State students.  I believe the BEST that
PSU can offer are in my course right now, but only 30 percent of them are
ready for what I wanted to do (and most of THEM are FACULTY!!).

I'm forced to backtrack and run a slightly more traditional "mini" course
to build a common foundation.  That course essentially will read STRUCTURE
AND INTERPRETATION OF COMPUTER PROGRAMS by Hal Abelson and Gerry Sussman.
[This book was developed for the freshman CS course (6.001) at MIT and will
be published in April.  It is now available as an MIT LCS tech report by
writing Abelson at 545 Technology Square, Cambridge, MA 02139.]

The "netwide" version of the course WILL continue in SOME (albeit perhaps
delayed) form.  My "mini" course should take about 6 weeks.  After that
the "AI and Mysticism" course can be restarted.

For now, I won't create net.ai.cse but rather will use net.ai for
occasional announcements.  I'll also keep addresses of all who wrote
expressing interest (and lack of a USENET connection).  Course
distributions will go (low volume) to that list and to net.ai until
things start to pick up.  When it becomes necessary we will "fork off"
into a net.ai subgroup.

So keep the faith, all you excited people!  This course is yet to be!!

        Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: Fri 3 Feb 84 00:24:28-EST
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: 1984 LISP Conference submissions deadline moved back

Because of delays that occurred in getting out the call for papers,
the deadline for submissions to the 1984 ACM Symposium on LISP and
Functional Programming (to be held August 5-8, 1984) has been moved
back from February 6 to February 15.  The date for notification of
acceptance or rejection of papers is now March 20 (was March 12).
The date for return of camera-ready copy is now May 20 (was May 15).

Please forward this message to anyone who may find it of interest.
--Thanks,
        Guy L. Steele Jr.
        Program Chairman, 1984 ACM S. on L. and F.P.
        Tartan Laboratories Incorporated
        477 Melwood Avenue
        Pittsburgh, Pennsylvania 15213
        (412)621-2210

------------------------------

Date: 31 Jan 84 19:54:56-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Cybernetics Congress - (nf)
Article-I.D.: uiucdcs.5284

6th International Congress of the World Organisation
        of General Systems and Cybernetics
        10--14 September 1984
        Paris, France
This transdisciplinary congress will present the contemporary aspects
of cybernetics and of systems, and examine their different currents.
The proposed topics include both methods and domains of cybernetics
and systems:
  1) foundations, epistemology, analogy, modelling, general methods
     of systems, history of cybernetics and systems science ideas.
  2) information, organisation, morphogenesis, self-reference, autonomy.
  3) dynamic systems, complex systems, fuzzy systems.
  4) physico-chemical systems.
  5) technical systems: automatics, simulation, robotics, artificial
     intelligence, learning.
  6) biological systems: ontogenesis, physiology, systemic therapy,
     neurocybernetics, ethology, ecology.
  7) human and social systems: economics, development, anthropology,
     management, education, planning.

For further information:
                                     WOGSC
                               Comite de lecture
                                     AFCET
                               156, Bld. Pereire
                             F 75017 Paris, France
Those who want to attend the congress are urged to register by writing
to AFCET, at the above address, as soon as possible.

------------------------------

End of AIList Digest
********************

∂05-Feb-84  0007	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #13
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Feb 84  00:07:15 PST
Date: Sat  4 Feb 1984 23:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #13
To: AIList@SRI-AI


AIList Digest             Sunday, 5 Feb 1984       Volume 2 : Issue 13

Today's Topics:
  Brain Theory - Parallelism,
  Seminars - Neural Networks & Automatic Programming
----------------------------------------------------------------------

Date: 31 Jan 84 09:15:02 EST  (Tue)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: parallel processing in the brain

       From: Rene Bach <BACH@SUMEX-AIM.ARPA>
       What are the evidences that the brain is a parallel processor?  My own
       introspection seem to indicate that mine is doing time-sharing.  That is
       I can follow only one idea at a time, but with a lot of switching
       between reasoning paths (often more non directed than controlled
       switching).

Does that mean you hold your breath and stop thinking while you're
walking, and stop walking in order to breathe or think?

More pointedly, I think it's incorrect to consider only
consciously-controlled processes when we talk about whether or not
the brain is doing parallel processing.  Perhaps the conscious part
of your mind can keep track of only one thing at a time, but most
(probably >90%) of the processing done by the brain is subconscious.

For example, most of us have to think a LOT about what we're doing
when we're first learning to drive.  But after a while, it becomes
largely automatic, and the conscious part of our mind is freed to
think about other things while we're driving.

As another example, have you ever had the experience of trying
unsuccessfully to remember something, and later remembering
whatever-it-was while you were thinking about something else?
SOME kind of processing was going on in the interim, or you
wouldn't have remembered whatever-it-was.

------------------------------

Date: 30 Jan 84 20:18:33-PST (Mon)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: intelligence and genius - (nf)
Article-I.D.: uiucdcs.5259

Sorry, js@psuvax, but I DO know something about what I spoke of, even if I
do have trouble typing.

I am aware that theorem-proving machines are impossible. It's also fairly
obvious that they would use lots of time and space.

However, I didn't even MENTION them. I talked about two flavors of machine.
One generated well-formed strings, and the other said whether they were
true or not. I didn't say either machine proved them. My point was that the
second of these machines is also impossible, and is closely related to
Jerry's genius finding machines. [I assume that any statement containing
genius is true.]

        Down with replying without reading!
        <mike

------------------------------

Date: Wed, 1 Feb 84 13:54:21 PST
From: Richard Foy <foy@AEROSPACE>
Subject: Brain Processing

The February Scientific American has an article entitled "The
Skill of Typing" which can help one form insights into the
mechanisms of the brain's processing.
richard

------------------------------

Date: Thu, 2 Feb 84 08:24:35 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: AIList Digest   V2 #10

Re: Parallel Processing in the Brain

  There are several instances of people experiencing what can most easily
be explained as "tasking" in the brain (see an essay by Henri Poincare in "The
World of Mathematics", and "The Seamless Web" by Stanley Burnshaw).  It appears
that the conscious mind is rather clumsy at creative work, and in large measure
assigns tasks (in parallel) to the subconscious mind, which operates in the
background.  When the background task is finished, an interrupt is generated
and the conscious mind becomes aware of the solution without knowing how the
problem was solved.
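[The foreground/background model described above can be caricatured in a few lines of present-day code -- purely an editor's sketch, not a claim about neural mechanism.  A background worker grinds on a problem while the "conscious" thread attends to other work, and an event plays the role of the interrupt.  All names and the toy problem are invented for illustration. -- Ed.]

```python
import threading

solution = {}
done = threading.Event()  # the "interrupt" raised when the background task completes

def subconscious(problem_size):
    # Stand-in for unconscious incubation: grind away outside awareness.
    solution["answer"] = sum(range(problem_size))
    done.set()  # the interrupt: the answer "pops into" consciousness

worker = threading.Thread(target=subconscious, args=(1000,))
worker.start()

# Meanwhile the "conscious" thread attends to something else entirely.
conscious_work = [w.upper() for w in ["walk", "talk", "chew gum"]]

done.wait()  # the "Aha!" moment
worker.join()
print(conscious_work, solution["answer"])
```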

  --Charlie

------------------------------

Date: Thu 2 Feb 84 10:17:08-PST
From: Kenji Sugiyama <SUGIYAMA@SRI-AI.ARPA>
Subject: Re: Parallel brain?

I had a strange experience when I practiced the abacus in Japan.
An abacus is used for adding, subtracting, multiplying, and dividing
numbers.  The practice consisted of a set of calculations in a definite
amount of time, say, 15 minutes.  During that time, I began to think
of something other than the problem at hand.  Then I noticed that
fact ("Aha, I thought of this and that!"), and grinned at myself in
my mind.  In spite of these detours, I continued my calculations without
interruption.  This kind of experience was repeated several times.

It seems to me that my brain might be parallel, at least in simple tasks.

------------------------------

Date: 2 Feb 1984 8:16-PST
From: fc%USC-CSE@ECLA.ECLnet
Subject: Re: AIList Digest   V2 #10

parallelism in the brain:
        Can you walk and chew gum at the same time?
                        Fred

------------------------------

Date: Sat, 4 Feb 84 15:06:09 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: The brain is parallel, yet data flow can be serial...

        In response to Rene Bach's question whether "the brain is a parallel
processor":  there is no response other than an emphatic YES!  The
brain is composed of about 10E9 neurons.  Each one of those neurons is
making locally autonomous calculations; it's hard to get more parallel than
that!  The lower brain functions (e.g., sensory preprocessing, lower motor
control, etc.) are highly distributed and locally autonomous processors (i.e.,
pure parallel data flow).  At the higher thought-processing levels, however,
it has been shown (I can't cite anything, but I can get sources if someone
wants me to dig them out) that logic tends to run in a serial fashion.
That is, the brain is parallel (a hardware structure), yet higher logic
processes sequence the timing of thought serially (a "software"
structure).
        It is generally agreed that the brain is an associational
machine; it processes based upon the timing of diffuse stimuli and the
resulting changes in the "action potential" of its member neurons.
"Context" helps to define the strength and structure of those associational
links.  Higher thinking is generally a cognitive process where the context
of situations is manipulated.  Changing context (and some associational
links) will often result in a "conclusion" significantly different from the
one previously arrived at.  Higher thought may be viewed as a three-process
cycle:  decision (evaluation of an associational network), reasonability
testing (i.e., is the present decision using a new "context" no different
from the decision arrived upon utilizing the previous "context"?), and
context alteration (i.e., "if my 'decision' is not 'reasonable' what
'contextual association' may be omitted or in error?").  This cycle is
continued until the second step -- 'reasonability testing' -- has concluded
that the result of this 'thinking' process is at least plausible.  Although the
implementation (assuming the trichotomy is correct) in the brain is
via parallel neural structures, the movement of information through those
structures is serial in nature.  An interesting note on the above trichotomy:
note what occurs when the input to the associational network is changed.
If the new input is not consistent with the previously existing 'context'
then the 'reasonability tester' will cause an automatic readjustment of
the 'context'.
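[The decision / reasonability-testing / context-alteration cycle described above can be sketched as a short loop -- an editor's illustration only, with all predicates, contexts, and the toy example invented, and no claim that this is Kahn's intended formalization. -- Ed.]

```python
def think(evidence, contexts, decide, reasonable):
    """Kahn-style three-stage cycle: make a decision from the current
    context, test it for reasonability, and alter the context on failure."""
    for context in contexts:                  # stage 3: context alteration
        decision = decide(evidence, context)  # stage 1: decision
        if reasonable(decision):              # stage 2: reasonability testing
            return decision, context
    return None, None  # no plausible conclusion was reached

# Toy illustration (all names and rules hypothetical): interpret a sound.
evidence = "loud bang"
decide = lambda e, c: f"{e} is alarming" if c == "library" else f"{e} is expected"
reasonable = lambda d: "expected" in d
print(think(evidence, ["library", "fireworks show"], decide, reasonable))
```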
        Needless to say, this is not a rigorously proven theory of mine,
but I feel it is quite plausible and that there are profuse psychophysical
and psychological studies that reinforce the above model.  As of now, I
use it as a general guiding light in my work with vision systems, but it
seems equally applicable to general AI.

                        Philip Kahn
                        KAHN@UCLA-CS.ARPA

------------------------------

Date: 02/01/84 16:09:21
From: STORY at MIT-MC
Re:   Neural networks

                     [Forwarded by SASW@MIT-ML.]

DATE:   Friday, February 3, 1984
TITLE:  "NEURAL NETWORKS: A DISCUSSION OF VARIOUS MATHEMATICAL MODELS"
SPEAKER:        Margaret Lepley, MIT

Neural networks are of interest to researchers in artificial intelligence,
neurobiology, and even statistical mechanics.  Because of their random parallel
structure it is difficult to study the transient behavior of the networks.  We
will discuss various mathematical models for neural networks and show how the
behaviors of these models differ.  In particular we will investigate
asynchronous vs. synchronous models with undirected vs. directed edges of
various weights.
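[The synchronous vs. asynchronous distinction in the abstract above can be made concrete with a tiny undirected (symmetric-weight) threshold network -- an editor's sketch with invented weights, showing that the two update disciplines can reach different states from the same start. -- Ed.]

```python
# Tiny undirected (symmetric-weight) threshold network; weights are invented.
W = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

def weight(i, j):
    return W.get((min(i, j), max(i, j)), 0.0)

def update(state, i):
    # Threshold unit: fire iff the weighted input from the other units is positive.
    field = sum(weight(i, j) * state[j] for j in range(len(state)) if j != i)
    return 1 if field > 0 else 0

def synchronous_step(state):
    # All units recompute at once, each seeing only the old state.
    return [update(state, i) for i in range(len(state))]

def asynchronous_step(state, order=(0, 1, 2)):
    # Units recompute one at a time, each seeing earlier updates.
    state = list(state)
    for i in order:
        state[i] = update(state, i)
    return state

s = [1, 0, 1]
print(synchronous_step(s), asynchronous_step(s))  # the two disciplines disagree
```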

HOST:   Professor Silvio Micali

------------------------------

Date: 01 Feb 84  1832 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Feb 7th CSD Colloquium - Stanford

                  [Reprinted from the SU-SCORE bboard.]

                  A Perspective on Automatic Programming
                             David R. Barstow
                        Schlumberger-Doll Research
                    4:30pm, Terman Aud., Tues Feb 7th

Most work in automatic programming has focused primarily on the roles of
deduction and programming knowledge. However, the role played by knowledge
of the task domain seems to be at least as important, both for the usability
of an automatic programming system and for the feasibility of building one
which works on non-trivial problems. This perspective has evolved during
the course of a variety of studies over the last several years, including
detailed examination of existing software for a particular domain
(quantitative interpretation of oil well logs) and the implementation
of an experimental automatic programming system for that domain. The
importance of domain knowledge has two important implications: a primary goal
of automatic programming research should be to characterize the programming
process for specific domains; and a crucial issue to be addressed
in these characterizations is the interaction of domain and programming
knowledge during program synthesis.

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #14
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  00:03:42 PST
Date: Fri 10 Feb 1984 22:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #14
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 14

Today's Topics:
  Requests - SHRDLU & Spencer-Brown & Programming Tests & UNITS,
  Replies - R1/XCON & AI Text & Lisp Machine Comparisons,
  Seminars - Symbolic Supercomputer & Expert Systems & Multiagent Planning
----------------------------------------------------------------------

Date: Sun, 29 Jan 84 16:30:36 PST
From: Rutenberg.pa@PARC-MAXC.ARPA
Reply-to: Rutenberg.pa@PARC-MAXC.ARPA
Subject: does anyone have SHRDLU?

I'm looking for a copy of SHRDLU, ideally in
machine readable form although a listing
would also be fine.

If you have a copy or know of somebody
who does, please send me a message!

Thanks,
        Mike

------------------------------

Date: Mon, 6 Feb 84 14:48:37 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: AIList Digest   V2 #12

I would dearly like to get in contact with G. Spencer-Brown.  Can anyone
give me any kind of lead?  I have tried his publisher, Bantam, and got
no results.

Thanks.

  --Charlie

------------------------------

Date: Wed,  8 Feb 84 19:26:38 CST
From: Stan Barber <sob@rice>
Subject: Testing Programming Aptitude or Competence

I am interested in information on the following tests that have been or are
currently administered to determine Programming Aptitude or Competence.

1. Aptitude Assessment  Battery:Programming (AABP) created by Jack M. Wolfe
and made available to employers only from Programming Specialists, Inc.
Brooklyn NY.

2. Programmer Aptitude/Competence Test System sold by Haverly Systems,
Inc. (Introduced in 1970)

3. Computer Programmer Aptitude Battery by SRA (Science Research Associates),
Inc. (Examined by F. L. Schmidt et al. in Journal of Applied Psychology,
Volume 65 [1980], pp. 643-661)

4. CLEP Exam on Computers and Data Processing. The College Board and the
Educational Testing Service.

5. Graduate Record Exam Advanced Test in Computer Science by the Educational
Testing Service.

Please send the answers to the following questions if you have taken or
had experience with any of these tests:

1. How many scores were reported, and what titles were used, for the
version of the exam that you took?

2. Did you feel the test actually measured your ability to learn to
program or your current programming competence (that is, did you feel it
asked relevant questions)?

3. What are your general impressions about testing and more specifically
about testing special abilities or skills (like programming, writing, etc.)?

I will package up the results and send them to Human-nets.

My thanks.


                        Stan Barber
                        Department of Psychology
                        Rice University
                        Houston TX 77251

                        sob@rice                        (arpanet, csnet)
                        sob.rice@rand-relay             (broken arpa mailers)
                        ...!{parsec,lbl-csam}!rice!sob  (uucp)
                        (713) 660-9252                  (bulletin board)

------------------------------

Date: 6 Feb 84 8:10:41-PST (Mon)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: UNITS request: Second Posting
Article-I.D.: vaxine.182

Good morning!

   I am looking for a pointer to someone (or something) who is knowledgeable
about the features and the workings of the UNITS package, developed at
Stanford HPP.  If you know something, or someone, and could drop me a note
(through mail) I would greatly appreciate it.

   Thanks in advance.


                                Charlie Berg
                             ...allegra!linus!vaxine!chb

------------------------------

Date: 5 Feb 84 20:28:09-PST (Sun)
From: hplabs!hpda!fortune!amd70!decwrl!daemon @ Ucb-Vax
Subject: DEC's expert system for configuring VAXen
Article-I.D.: decwrl.5447

[This is in response to an unpublished request about R1. -- KIL]

Just for the record - we changed the name from "R1" to "XCON" about a year
ago I think.   It's a very useful system and is part of a family of expert
systems which assist us in the operation of various corporate divisions
(sales, service, manufacturing, installation).

Mark Palmer
Digital

        (UUCP)  {decvax, ucbvax, allegra}!decwrl!rhea!nacho!mpalmer

        (ARPA)  decwrl!rhea!nacho!mpalmer@Berkeley
                decwrl!rhea!nacho!mpalmer@SU-Shasta

------------------------------

Date: 6 Feb 84 7:15:33-PST (Mon)
From: harpo!utah-cs!hansen @ Ucb-Vax
Subject: Re: AI made easy??
Article-I.D.: utah-cs.2473

I'd try Artificial Intelligence by Elaine Rich (McGraw-Hill).  It's easy
reading, not too technical but gives a good overview to the novice.

Chuck Hansen {...!utah-cs}

------------------------------

Date: 5 Feb 84 8:48:26-PST (Sun)
From: hplabs!sdcrdcf!darrelj @ Ucb-Vax
Subject: Re: Lisp Machines
Article-I.D.: sdcrdcf.813

There are really no such things as reasonable benchmarks for systems as
different as the various Lisp machines and VAXen.  Each machine has different
strengths and weaknesses.  Here is a rough ranking of machines:
VAX 780 running Fortran/C standalone
Dorado (5 to 10X dolphin)
LMI Lambda, Symbolics 3600, KL-10 Maclisp (2 to 3X dolphin)
Dolphin, dandelion, 780 VAX Interlisp, KL-10 Interlisp

Relative speeds are very rough, and dependent on application.

Notes:  Dandelion and Dolphin have 16-bit ALUs; as a result most arithmetic
is pretty slow (and things like transcendental functions are even worse
because there's no way to do floating-point arithmetic without boxing each
intermediate result).  There is quite a wide range of I/O bandwidth among
these machines -- up to 530 Mbits/sec on a Dorado, 130 Mbits/sec on a Dolphin.

Strong points of various systems:
Xerox: a family of machines fully compatible at the core-image level,
spanning a wide range of price and performance (as low as $26k for a minimum
dandelion, to $150k for a heavily expanded Dorado).  Further, with the
exception of some of the networking and all the graphics, it is very highly
compatible with both Interlisp-10 and Interlisp-VAX (it's reasonable to have
a single set of sources with just a bit of conditional compilation).
Because of the use of a relatively old dialect, they have a large and well
debugged manual as well.

LMI and Symbolics (these are really fairly similar as both are licensed from
the MIT lisp machine work, and the principals are rival factions of the MIT
group that developed it) these have fairly large microcode stores, and as
a result more things are fast (e.g., many of the graphics primitives are
microcoded), so these are probably the machines for moby amounts of image
processing and graphics.  There are also tools for compiling directly to
microcode for extra speed.  These machines also contain a secondary bus such
as Unibus or Multibus, so there is considerable flexibility in attaching
exotic hardware.

Weak points:  Xerox machines have a proprietary bus, so there are very few
options (the philosophy is to hook anything else to the Ethernet).  MIT
machines speak a new dialect of lisp that is only partially compatible with
MACLISP (though this did allow adding many nice features), and their cost is
too high to give everyone a machine.

The news item to which this is a response also asked about color displays.
Dolphin:  480x640x4 bits.  The 4 bits go thru a color map to 24 bits.
Dorado:  480x640x(4 or 8 or 24 bits).  The 4 or 8 bits go thru a color map to
         24 bits.  Lisp software does not currently support the 24 bit mode.
3600:  they have one or two (the LM-2 had 512x512x?) around 1Kx1Kx(8 or 16
or 24) with a color map to 30 bits.
Dandelion:  probably too little I/O bandwidth
Lambda:  current brochure makes passing mention of optional standard and
         high-res color displays.

Disclaimer:  I probably have some bias toward Xerox, as SDC has several of
their machines (in part because we already had an application in Interlisp).

Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdccsu3,trw-unix}!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: 6 Feb 84 16:40 PDT
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: Lisp Machines

I have seen several benchmarks as a former Symbolics and current Xerox
employee.  These benchmarks have typically compared the LM-2 with the
1100; they have even included actual or estimated(?) 3600, 1108, or 1132
performances.  These benchmarks, however, have seldom been very
informative because neither the actual test code nor a detailed
discussion of the implementation is provided.  For example, is the test on the
Symbolics machine coded in Zetalisp or with the Interlisp compatibility
package?  Or, in Interlisp, were fast functions used (FRPLACA vs.
RPLACA)?  (Zetalisp's RPLACA is equivalent to Interlisp's FRPLACA, so
that if this transformation was not performed the benchmark would favor
the Symbolics machine.)  What about efficiency issues such as block
compiling, compiler optimizers, or explicitly declaring variables?
There are also many other issues, such as what happens when the data set
gets very large in a real application instead of a toy benchmark, or, in
Zetalisp, whether you should turn the garbage collector on (it's not normally
on) and, when you do, what impact that has on performance.  In summary, be
cautious about claims without thorough supportive evidence.  Also
realize that each machine has its own strengths and weaknesses; there is
no definitive answer.  Caveat emptor!

------------------------------

Date: Sat, 4 Feb 84 19:24 EST
From: Thomas Knight <tk@MIT-MC.ARPA>
Subject: Concurrent Symbolic Supercomputer

                      [Forwarded by SASW@MIT-MC]


                                FAIM-1

                       Fairchild AI Machine #1

              An Ultra-Concurrent Symbolic Supercomputer

                                  by


                           Dr. A. L. Davis
      Fairchild Laboratory for Artificial Intelligence Research

                       Friday, February 10, 1984


Presently AI researchers are being hampered in the development of large-scale
symbolic applications, such as expert systems, by the lack of sufficient machine
horsepower to execute the application programs rapidly enough to
make the applications viable.  The intent of the FAIM-1 machine is to provide
a machine capable of 3 or 4 orders of magnitude performance improvement over
that currently available on today's large main-frame machines.  The
main source of performance increase is in the exploitation of concurrency at
the program, system, and architectural levels.

In addition to the normal ancillary support activities, the work is being
carried on in 3 areas:

        1.  Language Design - a frame based, object oriented language is being
            designed which allows the programmer to express highly concurrent
            symbolic algorithms.  The mechanism permits both logical and
            procedural programming styles in a unified message based semantics
            fashion.  In addition, the programmer may provide strategic
            information which aids the system in managing the concurrency
            structure on the physical resource components of the machine.

        2.  Machine Architecture - the machine derives its power from the
            homogeneous replication of a medium grain processor element.
            The element consists of a processor, message delivery subsystem,
            and a parallel pattern based memory subsystem known as the CxAM
            (Context Addressable Memory).  Two variants of a CxAM design are
            being done at this time and are targeted for fabrication on a
            sub 2 micron CMOS line.  The connection topology for the
            replicated elements is a 3 axis, single twist, Hex plane which
            has the advantages of planar wiring, easy extensibility, variable
            off surface bandwidth, and permits a variety of fault tolerant
            designs.  The Hex plane topology also permits nice hierarchical
            process growth without creating excess communication congestion
            which would cause false synchronization in otherwise concurrent
            activities.  In addition the machine is being designed in hopes
            of an eventual wafer-scale integrated implementation.

        3.  Resource Allocation - with any concurrent system which does not
            require machine dependent programming styles, there is a generic
            problem in mapping the concurrent activities extant in the program
            efficiently onto the multi-resource ensemble.  The strategy
            employed in the FAIM-1 system is to analyze the static structure of
            the source program, transform it into a graph, and then via a
            series of function preserving graph transforms produce a loadable
            version of the program which attempts to minimize communication
            cost while preserving the inherent concurrency structure.
            A certain level of dynamic compensation is guided by programmer
            supplied strategy information.

The talk will present an overview of the work we have done in these areas.

Host: Prof. Thomas Knight

------------------------------

Date: 8 Feb 84 15:59:49 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems this coming Tuesday...

                    [Reprinted from the Rutgers bboard.]

                                 I I I SEMINAR


          Title:    Automation of Modeling, Simulation and Experimental
                    Design - An Expert System in Enzyme Kinetics

          Speaker:  Von-Wun Soo

          Date:     Tuesday, February 14, 1984, 1:30-2:30 PM

          Location: Hill Center, Seventh floor lounge


  Von-Wun Soo, a Ph.D. student in our department, will give an informal talk on
the thesis research he is proposing.  This is his abstract:

       We  are proposing to develop a general knowledge engineering tool to
    aid biomedical researchers in developing biological models and  running
    simulation experiments. Without such powerful tools, these tasks can be
    tedious  and  costly.  Our aim is to integrate the techniques used in
    modeling, simulation, optimization, and experimental design by using an
    expert system approach. In addition we propose to carry out experiments
    on the processes of theory formation used by the scientists.

    Enzyme kinetics is the domain where we are concentrating  our  efforts.
    However, our research goal is not restricted to this particular domain.
    We  will attempt to demonstrate with this special case, how several new
    ideas  in  expert  problem  solving  including  automation  of   theory
    formation,  scientific  discovery,  experimental  design, and knowledge
    acquisition can be further developed.

    Four modules have been designed in parallel:  PROKINAL, EPX, CED, DISC.

    PROKINAL is a model generator which simulates the qualitative reasoning
    of the kineticists who conceptualize and postulate a reaction mechanism
    for a set of experimental data. By using a general procedure  known  as
    the  King-Altman  procedure to convert a mechanism topology into a rate
    law function, and  symbolic  manipulation  techniques  to  factor  rate
    constant   terms   to   kinetic   constant   terms,  PROKINAL  yields  a
    corresponding FORTRAN function which computes the reaction rate.

    EPX is a model simulation aid which is designed by combining EXPERT and
    PENNZYME. It is supposed to guide the novice user in  using  simulation
    tools  and  interpreting  the  results.  It  will take the data and the
    candidate model that has been generated from PROKINAL and estimate  the
    parameters by a nonlinear least square fit.

    CED  is an experimental design consultant which uses EXPERT to guide the
    computation of experimental conditions.  Knowledge  of  optimal  design
    from  the  statistical  analysis  has  been taken into consideration by
    EXPERT in order to give advice  on  the  appropriate  measurements  and
    reduce the cost of experimentation.

    DISC  is  a  discovery  module which is now at the stage of theoretical
    development. We wish to explore and simulate the behavior of scientific
    discovery in enzyme kinetics research and use the results in automating
    theory formation tasks.

------------------------------

Date: 09 Feb 84  2146 PST
From: Rod Brooks <ROD@SU-AI>
Subject: CSD Colloquium

                [Reprinted from the Stanford bboard.]

CSD Colloquium
Tuesday 14th, 4:30pm Terman Aud
Michael P. Georgeff, SRI International
"Synthesizing Plans for Co-operating Agents"

Intelligent agents need to be able to plan their activities so that
they can assist one another with some tasks and avoid harmful
interactions on others.  In most cases, this is best achieved by
communication between agents at execution time. This talk will discuss
a method for synthesizing a synchronized multi-agent plan to achieve
such cooperation between agents.  The idea is first to form
independent plans for each individual agent, and then to insert
communication acts into these plans to synchronize the activities of
the agents.  Conditions for freedom from interference and cooperative
behaviour are established.  An efficient method of interaction and
safety analysis is then developed and used to identify critical
regions and points of synchronization in the plans.  Finally,
communication primitives are inserted into the plans and a supervisor
process created to handle synchronization.
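[The core idea of the talk above -- forming independent plans and then inserting communication acts to synchronize them -- can be sketched in a few lines.  This is an editor's illustration only: the plan steps, the critical region, the `signal`/`wait` primitives, and the fixed agent priority are all invented, and Georgeff's actual analysis is far more general. -- Ed.]

```python
# Two independently formed plans; step names are invented for illustration.
plan_a = ["goto(bench)", "use(drill)", "store(part)"]
plan_b = ["goto(bench)", "use(drill)", "paint(part)"]
critical = {"use(drill)"}  # interaction analysis found one shared resource

def synchronize(plan_a, plan_b, critical):
    """Insert communication acts around critical-region steps so that
    agent B waits until agent A has left the region (a fixed priority,
    chosen here only for brevity)."""
    out_a, out_b = [], []
    for step in plan_a:
        out_a.append(step)
        if step in critical:
            out_a.append("signal(done)")   # tell B the region is free
    for step in plan_b:
        if step in critical:
            out_b.append("wait(done)")     # block until A has signalled
        out_b.append(step)
    return out_a, out_b

a, b = synchronize(plan_a, plan_b, critical)
print(a)
print(b)
```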

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #15
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  01:21:08 PST
Date: Fri 10 Feb 1984 22:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #15
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 15

Today's Topics:
  Proofs - Fermat's Theorem & 4-Color Theorem,
  Brain Theory - Parallelism
----------------------------------------------------------------------

Date: 04 Feb 84  0927 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Fermat and decidability

From the logical point of view, Fermat's last theorem is a Pi-1
statement. It follows that it is decidable. Whether it is valid
or not is another matter.

------------------------------

Date: Sat 4 Feb 84 13:13:14-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Spencer-Brown's Proof

I don't know anything about the current status of the computer proof of the
4-colour theorem, though the last I heard (five years ago) was that it was
"probably OK".   That's why I use the word "theorem".   However, I can shed
some light on Spencer-Brown's alleged proof -- I was present at a lecture in
Cambridge where he supposedly gave the outline of the proof, and  I applauded
politely, but was later fairly authoritatively informed that it disintegrated
under closer scrutiny.   This doesn't *necessarily* mean that the man is a
total flake, since other such proofs by highly reputable mathematicians have
done the same (we are told that one proof was believed for twelve whole years,
late in the 19th century, before its flaw was discovered).
                                                                - Richard

------------------------------

Date: Mon, 6 Feb 84 14:46:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Scientific Method

Isn't it interesting that most of what we think about proofs is belief!
I guess that until one actually retraces the steps of a proof and their
justifications, one can only express belief in its truth or falseness.

  --Charlie

------------------------------

Date: 3 Feb 84 8:48:01-PST (Fri)
From: harpo!eagle!allegra!alan @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: allegra.2254

I've been reading things like:

        My own introspection seem to indicate that ...
        I find, upon introspection, that ...
        I find that most of what my brain does is ...
        I also feel like ...
        I agree that based on my own observations, my brain appears to
          be ...

Is this what passes for scientific method in AI these days?

        Alan S. Driscoll
        AT&T Bell Laboratories

------------------------------

Date: 2 Feb 84 14:40:23-PST (Thu)
From: decvax!genrad!grkermit!masscomp!clyde!floyd!cmcl2!rocky2!cucard!
      aecom!alex @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: aecom.358

        If the brain were a serial processor, the limiting processing speed
would be the speed at which neurons conduct signals.  Humans, however, do
very complex processing in real time!  The other possibility is that the
data structures of the brain are HIGHLY optimized.
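[The force of this argument is back-of-the-envelope arithmetic; the figures below are rough, commonly quoted textbook numbers (editor's assumptions, not from the posting).  A strictly serial brain would fit only on the order of a hundred sequential steps into a half-second recognition task. -- Ed.]

```python
# Rough, commonly quoted figures (assumptions, not measurements):
max_firing_rate_hz = 200   # generous upper bound on a neuron's firing rate
reaction_time_s = 0.5      # time for a complex recognition judgment

# A strictly serial processor could take at most this many sequential steps:
serial_steps = int(max_firing_rate_hz * reaction_time_s)
print(serial_steps)  # prints 100 -- far too few for serial complex recognition
```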


                                Alex S. Fuss
                        {philabs, esquire, cucard}!aecom!alex

------------------------------

Date: Tue, 7 Feb 84 13:09:25 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: I can think in parale||,

but most of time I'm ---sequential. For example, a lot of * I can
talk with (:-{) and at the same time I can be thinking on s.m.t.i.g
else. I also do this when ai-list gets too boring: I keep browsing
until I find something intere sting, and then I do read, with a better
level of under-standing. In the u-time, I can daydream...

However, If I really want to get s.m.t.i.g done, then I cannot think
on anything else! In this cases, I just have one main-stream idea in
my mind. When I'm looking for a solution, I seldom use depth first,
or bread first search. Most of the time I use a convynatium of all
these tricks I know to search, until one 'works'.

To + up, I think we @|-< can do lots of things in lots of ways. And
until we furnish computers with all this tools, they won't be able to
be as intelligent as us. Just parale|| is not the ?↑-1.

        Adolfo
              ///

------------------------------

Date: 7 Feb 1984 1433-PST
From: EISELT%UCI-20A@Rand-Relay
Subject: More on Philip Kahn's reply to Rene Bach

I recently asked Philip Kahn (via personal net mail) to elaborate on his
three-process cycle model of thought, which he described briefly in his reply
to Rene Bach's question.  Here is my request, and his reply:

                      -------------------------

  In your recent submission to AIList, you describe a three-process cycle
model of higher-level brain function.  Your model has some similarities to
a model of text understanding we are working on here at UC Irvine.  You say,
though, that there are "profuse psychophysical and psychological studies that
reinforce the ... model."  I haven't seen any of these studies and would
be very interested in reading them.  Could you possibly send me references
to these studies?  Thank you very much.

Kurt Eiselt
eiselt@uci-20a


                       ------------------------

Kurt,

        I said "profuse" because I have come across many psychological
and physiological studies that have reinforced my belief.  Unfortunately,
I have very few specific references on this, but I'll tell you as much as
I can....

        I claim there are three stages: associational, reasonability, and
context.  I'll tell you what I've found to support each.  Associational
nets, also called "computational" or "parameter" nets, have been getting
a lot of attention lately.  Especially interesting are the papers coming out
of Rochester (in New York state).  I suggest the paper by Feldman called
"Parameter Nets."  Also, McCulloch in "Embodiments of Mind" introduced a
logical calculus that he proposes neural mechanisms use to form associational
networks.  Since then, a considerable amount of work has been done on
logical calculus, and these works are directly applicable to the analysis
of associational networks.  One definitive "associational network" found
in nature that has been exhaustively defined by Ratliff is the lateral
inhibition that occurs in the linear image sensor of the Limulus crab.
Each element of the network inhibits its neighbors based upon its value,
and the result is the second spatial derivative of the image brightness.
Most of the works you will find to support associational nets are directly
culled from neurophysiological studies.  Yet, classical conditioning
psychology defines the effects of association in its studies on forward and
backward conditioning.  Personally, I feel the biological proof of
associational nets is more concrete.
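
Kahn's Limulus example is concrete enough to sketch in a few lines of
code.  This toy (the 2, -1, -1 weighting and the sample signal are
illustrative choices, not Ratliff's fitted model) shows how mutual
inhibition between neighbors yields the negated discrete second spatial
derivative of brightness, which enhances edges:

```python
# Toy sketch of lateral inhibition on a 1-D "image sensor": each
# interior element's output is twice its own value minus its two
# neighbors' values, i.e. the negated discrete second spatial
# derivative of the brightness signal.

def lateral_inhibition(signal):
    """Response of each interior element after inhibition by its neighbors."""
    return [2 * signal[i] - signal[i - 1] - signal[i + 1]
            for i in range(1, len(signal) - 1)]

# A brightness edge: the response peaks at the transition, flat regions
# give zero response -- the classic edge-enhancement effect.
edge = [1, 1, 1, 5, 5, 5]
print(lateral_inhibition(edge))  # [0, -4, 4, 0]
```

Uniform regions cancel to zero, so only brightness changes survive --
exactly the behavior described for the Limulus sensor above.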
        The support for a "reasonability" level of processing has more
psychological support, because it is generally a cognitive process.
For example, learning is facilitated by subject matter that is most
consistent with past knowledge; that is, knowledge is most facilitated by
a subject that is most "reasonable" in light of past knowledge.
Some studies have shown, though I can't cite them, that the less
"reasonable" a learning task, the lesser is the learned performance.
I remember having seen at least one paper (I believe it was by a natural
language processing researcher) that claimed that the facility of language
is a metaphorical process.  By definition, a metaphor is the comparison
of alike traits in dissimilar things; it seems to me this is a very good
way to look at the question of reasonability.  Again, though, no specific
references.  In neurophysiology there are found "feedback loops" that
may be considered "reasonability" testers in so far that they take action
only when certain conditions are not met.  You might want to look at work
done on the cerebellum to document this.
        "Context" has been getting a lot of attention lately.  Again,
psychology is the major source of supporting evidence, yet neurophysiology
has its examples also.  Hormones are a prime example of "contextual"
determinants.  Their presence or absence affects the processing that
occurs in the neurons that are exposed to them.  But on a more AI level,
the importance of context has been repeatedly demonstrated by psychologists.
I believe that context is a learned phenomenon.  Children have no construct
of context, and thus they are often able to draw conclusions that may be
associationally feasible, yet clearly contrary to the context of presentation.
Context in developmental psychology has been approached from a more
motivational point of view.  Maslow's hierarchies and the extensive work
on "values" all define different levels of context.  Whereas an
associational network may (at least in my book) involve excitatory
nodal influences, context involves inhibitory control over the nodes in
the associational network.  In my view, associational networks only know
(always associated), (often associated), and (weak association).
(Never associated) dictates that no association exists by default.  A
contextual network knows only that the following states can occur between
concepts: (never can occur) and (rarely occurs).  These can be defined using
logical calculus and learning theory.  The associational links are
determined solely by event pairing, and are the more dynamic of the two.
Contextual networks are more stable and can result from learning as well
as from introspective analysis of the associational links.
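
One way to read the distinction between the two networks is as a pair of
sparse link tables: an excitatory associational table with three
strengths, gated by an inhibitory contextual table.  The names, weights,
and example concepts below are purely illustrative assumptions, not
Kahn's formalism:

```python
# Toy sketch of the two-network view: excitatory associational links
# with three strengths, plus an inhibitory contextual table that can
# veto or dampen an association.  All names and weights are made up
# for illustration.

ASSOC_STRENGTH = {"always": 1.0, "often": 0.6, "weak": 0.2}
CONTEXT_PENALTY = {"never": 0.0, "rarely": 0.3}   # multiplicative inhibition

associations = {("smoke", "fire"): "always",
                ("snow", "beach"): "weak"}
context = {("snow", "beach"): "never"}            # vetoed by context

def activation(a, b):
    """Excitatory associational strength, gated by contextual inhibition."""
    # Absent pairs default to no association (Kahn's "never associated").
    strength = ASSOC_STRENGTH.get(associations.get((a, b)), 0.0)
    return strength * CONTEXT_PENALTY.get(context.get((a, b)), 1.0)

print(activation("smoke", "fire"))   # 1.0
print(activation("snow", "beach"))   # 0.0  (associated, but context inhibits)
```

The contextual table only ever reduces activation, matching the claim
that context exerts inhibitory control over the associational nodes.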
        As you can see, I have few specific references on "context," and rely
upon my own theory of context.  I hope I've been of some help, and I would
like to be kept apprised of your work.  If you want research evidence
for some of the above, I suggest you examine indices on the subjects I
mentioned.  Again,

                Good luck,
                Philip Kahn

------------------------------

Date: 6 Feb 84 7:18:25-PST (Mon)
From: harpo!ulysses!mhuxl!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: hou5d.809

See the Feb. Scientific American for an article on typists and speed.  There
is indeed evidence for a high degree of parallelism even in SIMILAR tasks.

                                                Mark Terribile

------------------------------

Date: Wed,  8 Feb 84 18:19:09 CST
From: Doug Monk <bro@rice>
Subject: Re: AIList Digest   V2 #11

Subject: Mike Brzustowicz's 'tip of the tongue' as parallel process

Rather than being an example of parallel processing, the 'tip of the
tongue' phenomenon is probably more an example of context switch, where
the attempt to recall the information displaces it temporarily, due to
too much pressure being brought to bear. ( Perhaps a form of performance
anxiety ? ) Later, when the pressure is off, and the processor has a spare
moment, a smaller recall routine can be used without displacing the
information. This model assumes that concentrating on the problem causes
more of the physical brain to be involved in the effort, thus perhaps
'overlaying' the data desired. Once a smaller recall routine is used,
the recall can actually be performed.

        Doug Monk       ( bro.rice@RAND-RELAY )

------------------------------

Date: 6 Feb 84 19:58:33-PST (Mon)
From: ihnp4!ihopa!dap @ Ucb-Vax
Subject: Re: parallel processing in the brain
Article-I.D.: ihopa.153

If you consider pattern recognition in humans when constrained to strictly
sequential processing, I think we are MUCH slower than computers.

In other words, how long do you think it would take a person to recognize
a letter if he could only inquire as to the grayness levels in different
pixels?  Of course, he would not be allowed to "fill in" a grid and then
recognize the letter on the grid.  Only a strictly algorithmic process
would be allowed.

The difference here, as I see it, is that the human mind DOES work in
parallel.  If we were forced to think sequentially about each pixel in our
field of vision, we would become hopelessly bogged down.  It seems to me
that the most likely way to simulate such a process is to have a HUGE
number of VERY dumb processors in a hierarchy of "meshes" such that some
small number of processors in common localities in a low level mesh would
report their findings to a single processor in the next higher level mesh.
This processor would do some very quick, very simple calculations and pass
its findings on to the next higher level mesh.  At the top level, the
accumulated information would serve to recognize the pattern.  I'm really
speaking off the top of my head since I'm no AI expert.  Does anybody know if
such a thing exists or am I way off?

Darrell Plank
BTL-IH
ihopa!dap

[Researchers at the University of Maryland and at the University of
Massachusetts, among others, have done considerable work on "pyramid"
and "processing cone" vision models.  The multilayer approach was
also common in perceptron-based pattern recognition, although very
little could be proven about multilayer networks.  -- KIL]
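
The mesh hierarchy Plank describes maps directly onto the pyramid and
processing-cone models mentioned above.  A minimal sketch, assuming a
simple 2x2 averaging rule at each level (one common choice, not a claim
about any particular system):

```python
# Minimal sketch of a "processing cone" / image pyramid: each cell at
# one level summarizes a 2x2 block of the level below, so findings are
# combined locally and passed upward through the hierarchy of meshes.
# The averaging rule is illustrative; real systems use many rules.

def reduce_level(grid):
    """Collapse an NxN grid (N even) into an N/2 x N/2 grid of 2x2 averages."""
    n = len(grid)
    return [[(grid[2*r][2*c] + grid[2*r][2*c+1] +
              grid[2*r+1][2*c] + grid[2*r+1][2*c+1]) / 4.0
             for c in range(n // 2)]
            for r in range(n // 2)]

def pyramid(grid):
    """Build all levels from the base grid up to a single summary cell."""
    levels = [grid]
    while len(levels[-1]) > 1:
        levels.append(reduce_level(levels[-1]))
    return levels

base = [[0, 0, 4, 4],
        [0, 0, 4, 4],
        [8, 8, 0, 0],
        [8, 8, 0, 0]]
print(pyramid(base)[-1])  # [[3.0]]
```

Each level halves the resolution, so a whole image collapses to one
summary cell in log time -- the appeal of the parallel mesh approach.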

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0215	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #16
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  02:14:33 PST
Delivery-Notice: While sending this message to SU-AI.ARPA, the
 SRI-AI.ARPA mailer was obliged to send this message in 50-byte
 individually Pushed segments because normal TCP stream transmission
 timed out.  This probably indicates a problem with the receiving TCP
 or SMTP server.  See your site's software support if you have any questions.
Date: Fri 10 Feb 1984 23:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #16
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 16

Today's Topics:
  Lab Description - New UCLA AI Lab,
  Report - National Computing Environment for Academic Research,
  AI Journal - New Developments in the Assoc. for Computational Linguistics,
  Course - Organization Design,
  Conference - Natural Language and Logic Programming & Systems Science
----------------------------------------------------------------------

Date: Fri, 3 Feb 84 22:57:55 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: New UCLA AI Lab

              Announcing the creation of a new Lab for
              Artificial Intelligence Research at UCLA.


Just recently, the UCLA CS  department  received  a  private  foundation
grant  of  $450,000  with  $250,000  matching  funds  from the School of
Engineering and Applied Sciences to create a Laboratory  for  Artificial
Intelligence  Research.   The  departmental chairman as well as the dean
strongly support this effort and are both committed to the growth of  AI
at UCLA.

In  addition, UCLA has been chosen as the site of the next International
Joint Conference on Artificial Intelligence (IJCAI-85) in August, 1985.

UCLA is second in the nation among public research universities  and  in
the  top  six overall in quality of faculty, according to a new national
survey of 5,000 faculty and 228  universities.   In  a  two  year  study
(conducted  by the Conference Board of the Associated Research Councils,
consisting of the American Council of Learned  Societies,  the  American
Council  on  Education,  the  National  Research  Council and the Social
Science Research Council) the UCLA  Computer  Science  Dept.   tied  for
sixth place with U.  of Ill., after Stanford, MIT, CMU, UC Berkeley, and
Cornell.

The UCLA CS department is the recipient (in  1982)  of  a  $3.6  million
five-year  NSF  Coordinated  Experimental Research grant, augmented by a
$1.5 million award from DARPA.

Right  now  the  AI lab includes a dozen Apollo DN300 workstations on an
Apollo Domain ring network.  This ring is attached via an ethernet  gate
to the CS department LOCUS network of 20 Vax 750s and a 780.  UCLA is on
the Arpanet and CSNet.  Languages include Prolog and T  (a  Scheme-based
dialect of lisp).  A number of DN320s, DN460s and a color Apollo (DN660)
are on order and will be  housed  in  a  new  area  being  reserved  for
graduate  AI research.  One Vax 750 on the LOCUS net and 10 Apollos will
be  reserved for graduate AI instruction.  Robotics and vision equipment
is also being acquired.  The  CS  dept  is  seeking  an  assist.   prof.
(tenure  track) in the area of AI, with preference for vision, robotics,
problem-solving, expert systems, learning, and simulation  of  cognitive
processes.  The new AI faculty member will be able to direct expenditure
of a portion of available funds.  (Interested AI PhDs, reply to  Michael
Dyer, CS dept, UCLA, Los Angeles, CA 90024.  Arpanet:  dyer@ucla-cs).

Our AI effort is new, but growing, and includes the following faculty:

     Michael Dyer: natural language processing, cognitive modeling.
     Margot Flowers: reasoning, argumentation, belief systems.
     Judea Pearl: theory of heuristics, search, expert systems.
     Alan Klinger: signal processing, pattern recognition, vision.
     Michel Melkanoff: CAD/CAM, robotics.
     Stott Parker: logic programming, databases.

------------------------------

Date: 26 Jan 84 14:22:30-EDT (Thu)
From: Kent Curtis <curtis%nsf-cs@CSNet-Relay>
Subject: A National Computing Environment for Academic Research

The National Science Foundation has released a report entitled "A National
Computing Environment for Academic Research" prepared by an NSF Working Group
on Computers for Research, Kent Curtis, Chairman. The table of contents is:

Executive Summary

I. The Role of Modern Computing in Scientific and Engineering Research
        with Special Concern for Large Scale Computation

        Background

        A. Summary of Current Uses and Support of Large Scale Computing for
           Research

        B. Critique of Current Facilities and Support Programs

        C. Unfilled Needs for Computer Support of Research

II. The Role and Responsibilities of NSF with Respect to Modern Scientific
    Computing

III. A Plan of Action for the NSF: Recommendations

IV. A Plan of Action for the NSF: Funding Implications

Bibliography

Appendix
        Large-scale Computing Facilities

If you are interested in receiving a copy of this report contact
Kent Curtis, (202) 357-9747; curtis.nsf-cs@csnet-relay;
or write Kent K. Curtis
         Div. of Computer Research
         NSF
         Washington, D.C.  20550

------------------------------

Date: 10 Feb 84 09:35:51 EST (Fri)
From: Journal Duties  <acl@Rochester.ARPA>
Subject: New Developments in the Assoc. for Computational Linguistics


The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS -- Some New Developments

    The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is the major
international journal devoted entirely to computational approaches to
natural language research.  With the 1984 volume, its name is being changed
to COMPUTATIONAL LINGUISTICS to reflect its growing international coverage.
There is now a European chapter of the ASSOCIATION FOR COMPUTATIONAL
LINGUISTICS and a growing interest in forming one in Asia.

The journal also has many new people on its Editorial Staff.  James Allen,
of the University of Rochester, has taken over as Editor.  The FINITE STRING
Editor is now Ralph Weischedel of the University of Delaware.  Lyn Bates of
Bolt Beranek and Newman is the Book Review Editor.  Michael McCord, now at
IBM, remains as Associate Editor.

With these major changes in editorial staffing, the journal has fallen
behind schedule.  In order to catch up this year, we will be publishing
close to double the regular number of issues.  The first issue for 1983,
which was just mailed out, contains papers on "Paraphrasing Questions Using
Given and New Information" by Kathleen McKeown and "Denotational Semantics
for 'Natural' Language Question-Answering Programs" by Michael Main and
David Benson.  There is a lengthy review of Winograd's new book by Sergei
Nirenburg and a comprehensive description of the new Center for the Study
of Language and Information at Stanford University.

Highlights of the forthcoming 1983 AJCL issues:

   - Volume 9, No. 2 (expected March '84) will contain, in addition
to papers on "Natural Language Access to Databases: Interpreting Update
Requests" by Jim Davidson and Jerry Kaplan and "Treating Coordination
in Logic Grammars" by Veronica Dahl and Michael McCord, a supplement:
a Directory of Graduate Programs in Computational Linguistics.
The directory is the result of two years of surveys, and provides a fairly
complete listing of programs available internationally.

   - Volume 9, Nos. 3 and 4 (expected June '84) will be a special double
issue on Ill-Formed Input.  The issue will cover many aspects of processing
ill-formed sentences from syntactic ungrammaticality to dealing with inaccurate
reference.  It will contain papers from many of the research groups that
are working on such problems.

    We will begin publishing Volume 10 later in the summer.  In addition
to the regular contributions, we are planning a special issue on the
mathematical properties of grammatical formalisms.  Ray Perrault (now at
SRI) will be guest editor for the issue, which will contain papers addressing
most of the recent developments in grammatical formalisms (e.g., GPSG,
Lexical-Function Grammars, etc).  Also in the planning stage is a special
issue on Machine Translation that Jonathan Slocum is guest editing.

    With its increased publication activity in 1984, COMPUTATIONAL
LINGUISTICS can provide authors with an unusual opportunity to have their
results published in the international community with very little delay.
A paper submitted now (early spring '84) could actually be in print by the
end of the year, provided that major revisions need not be made.  Five
copies of submissions should be sent to:

                 James Allen, CL Editor
                 Dept. of Computer Science
                 The University of Rochester
                 Rochester, NY 14627, USA

    Subscriptions to COMPUTATIONAL LINGUISTICS come with membership in the
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, which still is only $15 per year.
As a special bonus to new members, those who join the ACL for 1984 before
August will receive the special issue on Ill-Formed Input, even though it is
formally part of the volume for 1983.

To become a member, simply send your name, address and a check made out to
the Association for Computational Linguistics to:

                  Don Walker, ACL membership
                  SRI International
                  333 Ravenswood Avenue
                  Menlo Park, CA 94025, USA

People in Europe or with Swiss accounts can pay an equivalent value in Swiss
francs, by personal check in their own currency, or by a banker's draft that
credits account number 141.880.LAV at the Union Bank of Switzerland, 8 rue
de Rhone, CH-1211 Geneva 11, SWITZERLAND; send the statement with payment or
with a copy of the bank draft to:

                  Mike Rosner, ACL
                  ISSCO
                  54, route des Acacias
                  CH-1227 Geneva, SWITZERLAND

------------------------------

Date: Wednesday, 8 February 1984, 14:28-EST
From: Gerald R. Barber <JERRYB at MIT-OZ>
Subject: Course Announcement: Organization Design

                     [Forwarded by SASW@MIT-MC.]

The following is an announcement for a course that Tom Malone and I are
organizing for this spring term.  Anyone who is interested can come to
the course or contact:

        Tom Malone
        Malone@XX
        E53-307, x6843,
        or
        Jerry Barber
        Jerryb@OZ
        NE43-809, x5871



                          Course Announcement
                       15.963 Organization Design

                  Wednesdays, 2:30 - 5:30 p.m, E51-016
                          Prof. Thomas Malone

In this graduate seminar we will review research from a number of
fields, identifying general principles of organization design that apply
to many kinds of information processing systems, including human
organizations and computer systems.  This novel approach will integrate
examples and theories from computer science, artificial intelligence,
organization theory and economics.  The seminar will also include
discussion of several special issues that arise when these general
principles are applied to designing organizations that include both
people and computers.

A partial list of topics includes:

I.  Introduction
        A. What is an organization?
                Scott, March & Simon, Etzioni, etc
        B. What is design?
                Simon: Science of Design

II. Alternative Organizational Designs
        A. Markets
                Computer Systems: Contract Nets, Enterprise
                Organizational Theories: Simon, Arrow, Hurwicz
        B.  Hierarchies
                Computer Systems: Structured programming, inheritance
                  hierarchies
                Organizational Theories: Simon, March, Cyert, Galbraith,
                  Williamson
        C. Cooperating experts (or teams)
                Computer Systems: Hearsay, Ether, Actors, Smalltalk, Omega
                Organizational Theories: Marschak & Radner, Minsky & Papert

III. Integrating Computer Systems and Human Organizations
        A. Techniques for analyzing organizational needs
                Office Analysis Methodology, Critical Success Factors,
                Information Control Networks, Sociotechnical systems
        B. Possible technologies for supporting organizational problem-solving
                Computer conferencing, Knowledge-based systems

------------------------------

Date: Thu 2 Feb 84 20:35:47-PST
From: Pereira@SRI-AI
Subject: Natural Language and Logic Programming


                           Call for Papers

                      International Workshop On
                    Natural Language Understanding
                        and Logic Programming

                Rennes, France - September 18-20, 1984

The workshop will consider fundamental principles and important
innovations in the design, definition, uses and extensions of logic
programming for natural language understanding and, conversely, the
adequacy of logic programming to express natural language grammar
formalisms. The topics of interest are:

* Formal representations of natural language
* Logic grammar formalisms
* Linguistic aspects (anaphora, coordination,...)
* Analysis methods
* Natural language generation
* Uses of techniques for logic grammars (unification)
  in other grammar formalisms
* Compilers and interpreters for grammar formalisms
* Text comprehension
* Applications: natural-language front ends (database
  interrogation, dialogues with expert systems...)

Conference Chairperson

Veronica Dahl  Simon Fraser University,
               Burnaby B.C. V5A 1S6
               Canada

Program Committee

H. Abrahamson (UBC, Canada)        F. Pereira (SRI, USA)
A. Colmerauer (GIA, France)        L. Pereira (UNL, Portugal)
V. Dahl (Simon Fraser U., Canada)  P. Sabatier (CNRS, France)
P. Deransart (INRIA, France)       P. Saint-Dizier (IRISA, France)
M. Gross (LADL, France)            C. Sedogbo (Bull, France)
M. McCord (IBM, USA)

Sponsored by: IRISA, Groupe BULL, INRIA

Deadlines:

        April 15:       Submission of papers in final form
        June 10:        Notification of acceptance to authors
        July 10:        Registration for the Workshop

Submission of papers:

Papers should contain the following items: abstract and title of
paper, author name, country, affiliation, mailing address and
phone (or telex) number, one program area and the following
signed statement: ``The paper will be presented at the Workshop
by one of the authors''.

Summaries should explain what is new or interesting about
the work and what has been accomplished. Papers must report
recent and not yet published work.

Please send 7 copies of a 5 to 10 page single spaced manuscript,
including a 150 to 200 word abstract to:

-- Patrick Saint-Dizier
   Local Organizing Committee
   IRISA - Campus de Beaulieu
   F-35042 Rennes CEDEX - France
   Tel: (99)362000 Telex: 950473 F

------------------------------

Date: Sat, 4 Feb 84 10:18 cst
From: Bruce Shriver <ShriverBD.usl@Rand-Relay>
Subject: call for papers announcement

                              Eighteenth Annual
                       HAWAII INTERNATIONAL CONFERENCE
                                      ON
                               SYSTEM SCIENCES
                     JANUARY 2-4, 1985 / HONOLULU, HAWAII

This is the eighteenth in a series  of  conferences  devoted  to  advances  in
information  and  system sciences.  The conference will encompass developments
in theory or practice in the areas of  COMPUTER  HARDWARE  and  SOFTWARE,  and
advanced  computer  systems  applications in selected areas.  Special emphasis
will be devoted to MEDICAL  INFORMATION  PROCESSING,  computer-based  DECISION
SUPPORT SYSTEMS for upper-level managers in organizations, and KNOWLEDGE-BASED
SYSTEMS.

                               CALL FOR PAPERS

Papers are invited in the preceding and related areas and may be theoretical,
conceptual,  tutorial  or descriptive in nature.  The papers submitted will be
refereed and those selected for conference presentation will be printed in the
CONFERENCE PROCEEDINGS; therefore, papers submitted for presentation must  not
have  been  previously presented or published.  Authors of selected papers are
expected to attend the conference to  present  and  discuss  the  papers  with
attendees.

Relevant topics include:
                                  Deadlines
HARDWARE                          * Abstracts may be submitted to track
* Distributed Processing            chairpersons for guidance and indication
* Mini-Micro Systems                of appropriate content by MAY 1, 1984.
* Interactive Systems               (Abstract is required for Medical
* Personal Computing                Information Processing Track.)
* Data Communication              * Full papers must be mailed to appropriate
* Graphics                          track chairperson by JULY 6, 1984.
* User-Interface Technologies     * Notification of Accepted papers will be
                                    mailed to the author on or before
SOFTWARE                            SEPTEMBER 7, 1984.
* Software Design Tools &         * Final papers in camera-ready form will
  Techniques                        be due by OCTOBER 19, 1984.
* Specification Techniques
* Testing and Validation
* Performance Measurement &       Instructions for Submitting Papers
  Modeling                        1. Submit three copies of the full paper,
* Formal Verification                not to exceed 20 double-spaced pages,
* Management of Software             including diagrams, directly to the
  Development                        appropriate track chairperson listed
                                     below, or if in doubt, to the conference
APPLICATIONS                         co-chairpersons.
* Medical Information             2. Each paper should have a title page
  Processing Systems                 which includes the title of the paper,
* Computer-Based Decision            full name of its author(s), affiliat-
  Support Systems                    ation(s), complete address(es), and
* Management Information Systems     telephone number(s).
* Data-Base Systems for           3. The first page should include the
  Decision Support                   title and a 200-word abstract of the
* Knowledge-Based Systems            paper.

                                   SPONSORS
The  Eighteenth  Annual  Hawaii  International Conference on System Science is
sponsored by the University of  Hawaii  and  the  University  of  Southwestern
Louisiana, in cooperation with the ACM and the IEEE Computer Society.

HARDWARE                            All Other Papers
Edmond L. Gallizzi                  Papers not clearly within one of the
HICSS-18 Track Chairperson          aforementioned tracks should be mailed
Eckerd College                      to:
St. Petersburg, FL 33733            Ralph H. Sprague, Jr.
(813) 867-1166                      HICSS-18 Conference Co-chairperson
                                    College of Business Administration
SOFTWARE                            University of Hawaii
Bruce D. Shriver                    2404 Maile Way, E-303
HICSS-18 Track Chairperson          Honolulu, HI 96822
Computer Science Dept.              (808)948-7430
U. of Southwestern Louisiana
P. O. Box 44330
Lafayette, LA 70504                 Conference Co-Chairpersons
(318) 231-6284                      RALPH H. SPRAGUE, JR.
                                    BRUCE D. SHRIVER
DECISION SUPPORT SYSTEM &
KNOWLEDGE-BASED SYSTEMS             Contributing Sponsor Coordinator
Joyce Elam                          RALPH R. GRAMS
HICSS-18 Track Chairperson          College of Medicine
Dept. of General Business           Department of Pathology
BEB 600                             University of Florida
U. of Texas at Austin               Box J-275
Austin, TX 78712                    Gainesville, FL 32610
(512) 471-3322                      (904) 392-4571

MEDICAL INFORMATION PROCESSING      FOR FURTHER INFORMATION
Terry M. Walker                     Concerning Conference Logistics
HICSS-18 Track Chairperson          NEM B. LAU
Computer Science Dept.              HICSS-18 Conference Coordinator
U. of Southwestern Louisiana        Center for Executive Development
P. O. Box 44330                     College of Business Administration
Lafayette, LA 70504                 University of Hawaii
(318) 231-6284                      2404 Maile Way, C-202
                                    Honolulu, HI 96822
                                    (808) 948-7396
                                    Telex: RCA 8216 UHCED    Cable: UNIHAW

The HICSS conference is a non-profit activity organized to provide a forum for
the  interchange of ideas, techniques, and applications among practitioners of
the system sciences.  It maintains objectivity to the systems sciences without
obligation to any commercial  enterprise.   All  attendees  and  speakers  are
expected  to  have  their  respective companies, organizations or universities
bear the costs of their expenses and registration fees.

------------------------------

End of AIList Digest
********************

∂11-Feb-84  2236	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #17
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  22:34:28 PST
Date: Sat 11 Feb 1984 20:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #17
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 17

Today's Topics:
  Jargon - Glossary of NASA Terminology,
  Humor - Programming Languages
----------------------------------------------------------------------

Date: 23 Jan 84 7:41:17-PST (Mon)
From: hplabs!hao!seismo!flinn @ Ucb-Vax
Subject: Glossary of NASA Terminology

[Reprinted from the Space Digest by permission of the author.
This strikes me as an interesting example of a "natural sublanguage."
It does not reflect the growth and change of NASA jargon, however:
subsequent discussion on the Space Digest indicates that many of the
terms date back eight years and many newer terms are missing.  The
author and others are continuing to add to the list. -- KIL]


        I've been collecting examples of the jargon in common use by
people at NASA Headquarters.  Here is the collection so far:
I have not made any of these up.  I'd be glad to hear of worthy
additions to the collection.

        The 'standard NASA noun modifiers' are nouns used as
adjectives in phrases like 'science community' or 'planetary area.'
Definitions have been omitted for entries whose meaning ought to be
clear.

        -- Ted Flinn

Action Item
Actors in the Program
Ancillary
Ankle: 'Get your ankles bitten' = running into unexpected trouble.
Ant: 'Which ant is steering this log?' = which office is in charge
        of a project.
Appendice (pronounced ap-pen-di-see):  some people, never having
        seen a document with only one appendix, think that this
        is the singular of 'appendices.'
Area:  Always as 'X Area,' where X is one of the standard NASA
        noun modifiers.
Asterick:  pronounced this way more often than not.
Back Burner
Bag It: 'It's in the bag' = it's finished.
Ball of Wax
Baseline: verb or noun.
Basis:  Always as 'X Basis,' where X is one of the standard NASA
         noun modifiers.
Bean Counters:  financial management people.
Bed: 'Completely out of bed' = said of people whose opinions
        are probably incorrect.
Belly Buttons: employees.
Bench Scientists
Bend Metal:  verb, to construct hardware.
Bending Your Pick:  unrewarding activity.
Bent Out of Shape:  disturbed or upset, of a person.
Big Picture
Big-Picture Purposes
Bite the Bullet
Big-Ticket Item: one of the expensive parts.
Black-belt Bureaucrat:  an experienced and knowledgeable government
        employee.
Bless: verb, to approve at a high level of management.
Blow One's Skirts Up:  usually negative: 'that didn't blow
        their skirts up' = that didn't upset them.
Blow Smoke:  verb, to obfuscate.
Blown Out of the Water
Bottom Line
Bounce Off: to discuss an idea with someone else.
Brassboard (see Breadboard).
Breadboard (see Brassboard).
Bullet: one of the paragraphs or lines on a viewgraph, which are
         *never* numbered, but always labelled with a bullet.
Bulletize:  to make an outline suitable for a viewgraph.
Bureaucratic Hurdles
Burn:  verb, to score points off a competitor.
Burning Factor:  one of the critical elements.
Calibrate:  verb, to judge the capabilities of people or
              organizations.
Camel's Nose in the Tent
Can of Worms
Canned:  finished, as 'it's in the can.'
Can't Get There From Here.
Capture a Mission:  verb, to construct a launch vehicle for
                        a space flight.
Carve Up the Turkey
Caveat:  usually a noun.
Centers:  'on N-week centers' = at N-week intervals.
Choir, Preaching to the
Clock is Ticking = time is getting short.
Code:  Every section at NASA centers or Headquarters has a label
        consisting of one or more letters or numbers, and in
        conversations or less formal memos, sections are always
        referred to by the code rather than the name:
        Code LI, Code 931, Code EE, etc.
Commonality
Community:  'X Community,' where X is one of the standard NASA
                noun modifiers.
Concept:  'X Concept,' where X is one of the standard NASA
                noun modifiers.
Concur: verb, to agree.
Configure:  verb.
Constant Dollars:  cost without taking inflation into account
        (see Real-Year Dollars).
Contract Out
Core X:  The more important parts of X, where X is one of the
          nouns used as modifiers.
Correlative
Cost-Benefit Tradeoff
Cross-Cut:  verb, to look at something a different way.
Crump:  transitive verb, to cause to collapse.
Crutch: flimsy argument.
Cut Orders:  to fill out a travel order form; left over from the
                days when this was done with mimeograph stencils.
Cutting Edge
Data Base
Data Dump:  a report made to others, usually one's own group.
Data Point:  an item of information.
Debrief:  transitive verb, to report to one's own staff after
            an outside meeting.
Deep Yoghurt:  bad trouble.
Definitize:  verb, to make precise or definite.
De-integrate:  verb, to take apart (not dis-).
De-lid:  verb, to take the top off an instrument.
Delta:  an increment to cost or content.
Descope:  verb, to redesign a project as a result of budget
           cuts (not the opposite of scope, q.v.).
Development Concept
Dialog:  transitive verb.
Disadvantage:  transitive verb.
Disgruntee:  non-NASA person unhappy with program decisions.
Dog's Breakfast
Dollar-Limited
Driver:  an item making up a significant part of cost or
           schedule: 'X is the cost driver.'
Drop-Dead Date:  the real deadline; see 'hard deadline.'
Ducks in a Row
Egg on One's Face
End Item:  product.
End-Run the System
End to End
Extent to Which
Extramural
Facilitize:  verb, to make a facility out of something.
Factor in:  verb.
Feedback:  reaction of another section or organization to
             a proposition.
Fill This Square
Finalize
Finesse The System
First Cut:  preliminary estimate.
Fiscal Constraints
Flag:  verb, to make note of something for future reference.
Flagship Program
Flex the Parameters
Flux and Change
What Will Fly:  'see if it will fly.'
Folded In:  taken into account.
Forest: miss the f. for the trees.
Forgiving, unforgiving:  of a physical system.
Front Office
Full-Up:  at peak level.
Future:  promise or potential, as, 'a lot of potential future.'
Futuristic
Gangbusters
Glitch
Grease the Skids
Green Door:  'behind the green door' = in the Administrator's offices.
Go to Bat For
Goal:  contrasted to 'objective,' q.v.
Grabber
Gross Outline:  approximation.
Ground Floor
Group Shoot = brainstorming session.
Guidelines:  always desirable to have.
Guy:  an inanimate object such as a data point.
Hack:  'get a hack on X' = make some kind of estimate.
Hard Copy:  paper, as contrasted to viewgraphs.
Hard Deadline:  supposed deadline; never met.
Hard Over:  intransigent.
Head Counters:  personnel office staff.
Hit X Hard:  concentrate on X.
Hoop:  a step in realizing a program:  'yet to go through this hoop.'
Humanoid
Hypergolic:  of a person: intransigent or upset in general.
Impact:  verb.
Implement:  verb.
In-House
Initialize
Innovative
Intensive:  always as X-intensive.
Intercompare:  always used instead of 'compare.'
Issue:  always used instead of 'problem.'
Key:  adj., of issues:  'key issue; not particularly key'.
Knickers:  'get into their knickers' = to interfere with them.
Laicize: verb, to describe in terms comprehensible to lay people.
Lashup = rackup.
Lay Track:  to make an impression on management ('we laid a lot
                of track with the Administrator').
Learning Curve
Liaise:  verb.
Limited:  always as X-limited.
Line Item
Link Calculation
Liberate Resources:  to divert funds from something else.
Looked At:  'the X area is being looked at' = being studied.
Loop:  to be in the loop = to be informed.
Love It!   exclamation of approval.
Low-Cost
Machine = spacecraft.
Man-Attended Experiment
Marching Orders
Matrix
Micromanagement = a tendency to get involved in management of
                        affairs two or more levels down from
                        one's own area of responsibility.
Milestone
Mission Definition
Mode:  'in an X mode.'
Model-Dependent
Muscle:  'get all the muscle into X'
Music:  'let's all read from the same sheet of music.'
Necessitate
Nominal:  according to expectation.
Nominative:  adj., meaning unknown.
Nonconcur:  verb, to disagree.
Numb Nut:  unskilled or incapable person.
Objective:  as contrasted with 'goal' (q.v.)
Overarching Objective
Oblectation
Off-Load:  verb.
On Board:  'Y is on board' = the participation of Y is assured.
On-Boards:  employees or participants.
On Leave:  on vacation.
On the Part Of
On Travel:  out of town.
Open Loop
Out-of-House
Over Guidelines
Ox:  'depends on whose ox is gored.'
Package
Paradigm
Parking Orbit:  temporary assignment or employment.
Pathfinder Studies
Pedigree:  history of accumulation of non-NASA support for a mission.
Peg to Hang X On
Pie:  'another slice through this same pie is...'
Piece of the Action
Ping On:  verb, to remind someone of something they were
           supposed to do.
Pitch:  a presentation to management.
Placekeeper
Planning Exercise
Pony in This Pile of Manure Somewhere = some part of this mess
        may be salvageable.
Posture
Pre-Posthumous
Prioritize
Priority Listing
Problem Being Worked:  'we're working that problem.'
Problem Areas
Product = end item.
Programmatic
Pucker Factor:  degree of apprehension.
Pull One's Tongue Through One's Nose:  give someone a hard time.
Pulse:  verb, as, 'pulse the system.'
Quick Look
Rackup = lashup.
Rainmaker:  an employee able to get approval for budget increases
                or new missions.
Rapee: a person on the receiving end of an unfavorable decision.
Rattle the Cage:  'that will rattle their cage.'
Real-Year Dollars: cost taking inflation into account, as
        contrasted with 'constant dollars.'
Reclama
Refugee:  a person transferred from another program.
Report Out:  verb, used for 'report.'
Resources = money.
Resource-Intensive = expensive.
ROM: 'rough order of magnitude,' of estimates.
Rubric
Runout
Sales Pitch
Scenario
Scope:  verb, to attempt to understand something.
Scoped Out:  pp., understood.
Secular = non-scientific or non-technological.
Self-Serving
Sense:  noun, used instead of 'consensus.'
Shopping List
Show Stopper
Sign Off On something = approve.
Space Cadets:  NASA employees.
Space Winnies or Wieners:  ditto, but even more derogatory.
X-Specific
Speak to X:  to comment on X, where X is a subject, not a person.
Specificity
Speed, Up To
Spinning One's Wheels
Spooks:  DOD or similar people from other agencies.
Staff:  verb.
Standpoint:  'from an X standpoint'
Statussed:  adj., as, 'that has been statussed.'
Strap On:  verb, to try out:  'strap on this idea...'
Strawman
String to One's Bow
Street, On The:  distributed outside one's own office.
Stroking
Structure: verb.
Subsume
Success-Oriented:  no provision for possible trouble.
Surface:  verb, to bring up a problem.
Surveille: verb.
Suspense Date:  the mildest form of imaginary deadline.
Tail:  to have one's tail in a crack = to be upset or in trouble.
Tall Pole in the Tent:  data anomaly.
Tar With the Same Brush
On Target
Task Force
Team All Set Up
Tickler = reminder.
Tiger Team
Time-Critical:  something likely to cause schedule trouble.
Time Frame
Torque the System
Total X, where X is one of the standard NASA noun modifiers.
Total X Picture
Truth Model
Unique
Update:  noun or verb.
Up-Front:  adj.
Upscale
Upper Management
Vector:  verb.
Vector a Program:  to direct it toward some objective.
Ventilate the Issues:  to discuss problems.
Versatilify:  verb, to make something more versatile.
Viable: adj., something that might work or might be acceptable.
Viewgraph:  always mandatory in any presentation.
Viz-a-Viz
WAG = wild-assed guess.
Wall to Wall:  adj., pervasive.
Watch:  'didn't happen on my watch...'
Water Off a Duck's Back
Waterfall Chart:  one way of presenting costs vs. time.
I'm Not Waving, I'm Drowning
Wedge; Planning Wedge:  available future-year money.
Been to the Well
Where Coming From
Whole Nine Yards
X-Wide
X-wise
Workaround:  way to overcome a problem.
Wrapped Around the Axle:  disturbed or upset.

------------------------------

Date: Wed 8 Feb 84 07:14:34-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: The Best Languages in Town!!! (forwarded from USENET)

                [Reprinted from the UTexas-20 bboard.]

From: bradley!brad    Feb  6 16:56:00 1984

                               Laidback with (a) Fifth
                               By  John Unger Zussman
                            From Info World, Oct 4, 1982


              Basic, Fortran, Cobol... These programming languages are well
          known  and (more or less)  well loved throughout the computer in-
          dustry.  There are numerous other languages,  however,  that  are
          less  well  known yet still have ardent devotees.  In fact, these
          little-known languages generally have the most fanatic  admirers.
          For  those  who wish to know more about these obscure languages -
          and why they are obscure - I present the following catalog.

              SIMPLE ... SIMPLE is an acronym for Sheer Idiot's  Mono  Pur-
          pose   Programming   Linguistic   Environment.    This  language,
          developed at the Hanover College for Technological  Misfits,  was
          designed  to  make it impossible to write code with errors in it.
          The statements are, therefore, confined to BEGIN, END, and STOP.
          No matter how you arrange the statements, you can't make a syntax
          error.

              Programs written in  SIMPLE  do  nothing  useful.  Thus  they
          achieve  the  results  of  programs  written  in  other languages
          without the tedious, frustrating process of  testing  and  debug-
          ging.

              SLOBOL ... SLOBOL is best known for the speed, or lack of it,
          of  its  compiler.   Although  many compilers allow you to take a
          coffee break while they compile, SLOBOL compilers  allow  you  to
          take  a  trip to Bolivia to pick up the coffee.  Forty-three pro-
          grammers are known to have died of boredom sitting at their  ter-
          minals while waiting for a SLOBOL program to compile.  Weary SLO-
          BOL programmers often turn to a related (but  infinitely  faster)
          language, COCAINE.

              VALGOL ... (With special thanks to Dan and Betsy "Moon  Unit"
          Pfau)  -  From its modest beginnings in southern California's San
          Fernando Valley, VALGOL is enjoying a dramatic surge of populari-
          ty across the industry.

              VALGOL commands include REALLY, LIKE, WELL and Y$KNOW.  Vari-
          ables are assigned with the  =LIKE and =TOTALLY operators.  Other
          operators include the "CALIFORNIA BOOLEANS", FERSURE, and  NOWAY.
          Repetitions of code are handled in FOR-SURE loops. Here is a sam-
          ple VALGOL program:
                    14 LIKE, Y$KNOW (I MEAN) START
                    %% IF
                    PI A =LIKE BITCHEN AND
                    01 B =LIKE TUBULAR AND
                    9  C =LIKE GRODY**MAX
                    4K (FERSURE)**2
                    18 THEN
                    4I FOR I=LIKE 1 TO OH MAYBE 100
                    86 DO WAH + (DITTY**2)
                    9  BARF(I) =TOTALLY GROSS(OUT)
                    -17 SURE
                    1F LIKE BAG THIS PROGRAM
                    ?  REALLY
                    $$ LIKE TOTALLY (Y*KNOW)

              VALGOL is characterized by  its  unfriendly  error  messages.
          For  example, when the user makes a syntax error, the interpreter
          displays the message, GAG ME WITH A SPOON!

              LAIDBACK ... Historically, VALGOL is a  derivative  of  LAID-
          BACK,  which  was  developed  at  the  (now defunct) Marin County
          Center for T'ai Chi, Mellowness, and Computer Programming, as  an
          alternative to the more intense atmosphere in nearby Silicon
          Valley.

              The center was ideal for programmers who liked to soak in hot
          tubs  while  they  worked.   Unfortunately, few programmers could
          survive there for long, since the center outlawed  pizza  and  RC
          Cola in favor of bean curd and Perrier.

              Many mourn the demise of LAIDBACK because of  its  reputation
          as  a  gentle  and nonthreatening language.  For example, LAIDBACK
          responded to syntax errors with the message, SORRY MAN,  I  CAN'T
          DEAL WITH THAT.

              SARTRE ... Named  after  the  late  existential  philosopher,
          SARTRE  is an extremely unstructured language.  Statements in SAR-
          TRE have no purpose; they just are there. Thus,  SARTRE  programs
          are  left to define their own functions.  SARTRE programmers tend
          to be boring and depressed and are no fun at parties.

              FIFTH ... FIFTH is a precision mathematical language in which
          the  data types refer to quantity.  The data types range from CC,
          OUNCE,  SHOT,  and  JIGGER  to  FIFTH  (hence  the  name  of  the
          language),  LITER,  MAGNUM,  and  BLOTTO.   Commands refer to in-
          gredients such as CHABLIS, CHARDONNAY, CABERNET,  GIN,  VERMOUTH,
          VODKA, SCOTCH and WHATEVERSAROUND.

              The many versions of the FIFTH language reflect the sophisti-
          cation  and financial status of its users.  Commands in the ELITE
          dialect include VSOP and LAFITE, while commands in the GUTTER di-
          alect  include  HOOTCH  and  RIPPLE.  The latter is a favorite of
          frustrated FORTH programmers who end up using the language.

              C- ... This language was named for the grade received by  its
          creator  when  he  submitted  it as a class project in a graduate
          programming class.  C- is best described as  a  "Low-Level"  pro-
          gramming language.  In fact, the language generally requires more
          C- statements than machine-code statements  to  execute  a  given
          task.  In this respect, it is very similar to COBOL.

              LITHP  ...  This  otherwise  unremarkable  language  is   dis-
          tinguished  by  the absence of an "s" in its character set.  Pro-
          grammers and users must substitute "TH".  LITHP is said to be
          useful in prothething lithtth.

              DOGO ... Developed at the Massachusetts Institute of  Obedi-
          ence Training, DOGO heralds a new era of computer-literate pets.
          DOGO commands include SIT, STAY, HEEL and ROLL OVER.  An  innova-
          tive feature of DOGO is "PUPPY GRAPHICS", in which a small cocker
          spaniel occasionally leaves a deposit as he  travels  across  the
          screen.

                              Submitted By Ian and Tony Goldsmith

------------------------------

End of AIList Digest
********************

∂11-Feb-84  2320	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #18
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  23:19:50 PST
Date: Sat 11 Feb 1984 21:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #18
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 18

Today's Topics:
  AI and Meteorology -  Summary of Responses
----------------------------------------------------------------------

Date: 11 Jan 84 16:07:00-PST (Wed)
From: ihnp4!fortune!rpw3 @ Ucb-Vax
Subject: Re: AI and Weather Forecasting - (nf)
Article-I.D.: fortune.2249

As for the desirability of using AI on the weather, it seems a bit
out of place, when there is rumoured to be a fairly straightforward
(if INCREDIBLY cpu-hungry) thermodynamic relaxation calculation that
gives very good results for 24 hr prediction. It uses as input the
various temperature, wind, and pressure readings from all of the U.S.
weather stations, including the ones cleverly hidden away aboard most
domestic DC-10's and L-1011's. Starting with those values as boundary
conditions, an iterative relaxation is done to fill in the cells of
the continental atmospheric model.

The joke, of course (no joke!), is that it takes 26 hrs to run on an Illiac IV
(somebody from Ames or NOAA or somewhere correct me, please). The accuracy
goes up as the cell size in the model goes down, but the runtime goes up as
the cube! So you can look out the window, wait 2 hours, and say, "Yup,
the model was right."
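The boundary-value relaxation described above can be sketched in miniature
(Python; the toy 5x5 grid, boundary "readings," and iteration count are
illustrative only, not the actual continental model):

```python
def relax(grid, iterations=500):
    """Jacobi relaxation: repeatedly replace each interior cell with the
    average of its four neighbours, holding the boundary values (the
    'station readings') fixed until the interior settles."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = (grid[i-1][j] + grid[i+1][j] +
                             grid[i][j-1] + grid[i][j+1]) / 4.0
        grid = new
    return grid

# Toy boundary conditions: cool readings along top/left, warm along
# bottom/right, zero initial guesses in the interior.
g = [[10.0]*5] + [[10.0, 0.0, 0.0, 0.0, 20.0] for _ in range(3)] + [[20.0]*5]
g = relax(g)
```

Shrinking the cell size multiplies both the number of cells and the
iterations needed, which is the cubic runtime growth mentioned above.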

My cynical prediction is that either (1) by the time we develop an
AI system that does as well, the deterministic systems will have
obsoleted it, or more likely (2) by the time we get an AI model with
the same accuracy, it will take 72 hours to run a 24 hour forecast!

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 19 Jan 84 21:52:42-EST (Thu)
From: ucbtopaz!finnca1 @ Ucb-Vax
Subject: Re: "You cant go home again"
Article-I.D.: ucbtopaz.370

It seems to me (a phrase that is always a copout for the ill-informed;
nonetheless, I proceed) that the real payoff in expert systems for weather
forecasting would be to capture the knowledge of those pre-computer experts who,
with limited data and even fewer dollars, managed to develop their
pattern-recognition facilities to the point that they could FEEL what was
happening and forecast accordingly.

I was privileged to take some meteorology courses from such an oldster many
years ago, and it was, alas,  my short-sightedness about the computer revolution
in meteorology that prevented me from capturing some of his expertise, to
buzz a word or two.

Surely not ALL of these veterans have retired yet...what a service to science
someone would perform if only this expertise could be captured before it dies
off.

        ...ucbvax!lbl-csam!ra!daven    or
        whatever is on the header THIS time.

------------------------------

Date: 15 Jan 84 5:06:29-PST (Sun)
From: hplabs!zehntel!tektronix!ucbcad!ucbesvax.turner @ Ucb-Vax
Subject: Re: Re: You cant go home again - (nf)
Article-I.D.: ucbcad.1315

Re: finnca1@topaz's comments on weather forecasting

Replacing expertise with raw computer power has its shortcomings--the
"joke" of predicting the weather 24 hours from now in 26 hours of cpu
time is a case in point.  Less accurate but more timely forecasts used
to be made by people with slide-rules--and where are these people now?

It wouldn't surprise me if the 20th century had its share of "lost arts".
Archaeologists still dig up things that we don't know quite how to make,
and the technological historians of the next century might well be faced
with the same sorts of puzzles when reading about how people got by
without computers.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: Wed 8 Feb 84 15:29:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Summary of Responses

The following is a summary of the responses to my AIList request for
information on AI and meteorology, spatial and temporal reasoning, and
related matters.  I have tried to summarize the net messages accurately,
but I may have made some unwarranted inferences about affiliations,
gender, or other matters that were not explicit in the messages.

The citations below should certainly not be considered comprehensive,
either for the scientific literature as a whole or for the AI literature.
There has been relevant work in pattern recognition and image understanding
(e.g., the work at SRI on tracking clouds in satellite images), mapping,
database systems, etc.  I have not had time to scan even my own collection
of literature (PRIP, CVPR, PR, PAMI, IJCAI, AAAI, etc.) for relevant
articles, and I have not sought out bibliographies or done online searches
in the traditional meteorological literature.  Still, I hope these
comments will be of use.

                        ------------------

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
reports that he and Alistair Frazer (Penn State Meteo Dept.) are advising
two meteorology/CS students who want to do senior/masters theses in AI.
They have submitted a proposal and expect to hear from NSF in a few months.


Capt. Roslyn (Roz) J. Taylor, Applied AI Project Officer, USAF, @RADC,
has read two of the Gaffney/Racer papers entitled "A Learning Interpretive
Decision Algorithm for Severe Storm Forecasting."  She found the algorithm
to be a "fuzzy math"-based fine-tuning algorithm in much the same spirit
as a Kalman filter.  The algorithm might be useful as the numerical
predictor in an expert system.
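For readers unfamiliar with the comparison, the predict/update cycle of a
scalar Kalman filter, the "fine-tuning" spirit referred to above, can be
sketched as follows (Python; the state, noise variances, and measurements
are invented for illustration and have nothing to do with the
Gaffney/Racer algorithm itself):

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, p -- current state estimate and its variance
    z    -- new measurement
    q, r -- process and measurement noise variances"""
    # Predict: state carries over, uncertainty grows by process noise.
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Feed in a few noisy measurements of a quantity near 1.0.
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
```

Each new observation nudges the estimate while steadily shrinking its
variance, which is what makes such a filter a plausible numerical
predictor inside an expert system.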


Jay Glicksman of the Texas Instruments Computer Science Lab suggests
that we check out

  Kawaguchi, E. et al. (1979)
  An Understanding System of Natural Language and Pictorial Pattern in
  the World of Weather Reports
  IJCAI-6 Tokyo, pp. 469-474

It does not provide many details and he has not seen a follow up, but
the paper may give some leads.  This paper is evidently related to the
Taniguchi et al. paper in the 6th Pat. Rec. proceedings that I mentioned
in my query.

Dr. John Tsotsos and his students at the Univ. of Toronto Laboratory for
Computational Medicine have been working for several years on the ALVEN
system to interpret heart images in X-ray films.  Dr. Tsotsos feels that the
spatial and temporal reasoning capabilities of the system would be of use in
meteorology.  The temporal reasoning includes intervals, points,
hierarchies, and temporal sampling considerations.  He has sent me the
following reports:

  R. Gershon, Y. Ali, and M. Jenkin, An Explanation System for Frame-based
  Knowledge Organized Along Multiple Dimensions, LCM-TR83-2, Dec. 1983.

  J.K. Tsotsos, Knowledge Organization: Its Role in Representation,
  Decision-making and Explanation Schemes for Expert Systems, LCM-TR83-3,
  Dec. 1983.

  J.K. Tsotsos, Representational Axes and Temporal Cooperative Processes,
  Preliminary Draft.

I regret that I have found time for only a cursory examination of these papers,
and so cannot say whether they will be useful in themselves for meteorology
or only as a source of further references in spatial and temporal reasoning.
Someone else in my group is now taking a look at them.  Other papers from
Dr. Tsotsos's group may be found in IJCAI 77, 79, and 81, PRIP 81, ICPR 82,
PAMI Nov. 80, and IEEE Computer Oct. 83.


Stuart C. Shapiro at the Univ. of Buffalo (SUNY) CS Dept. added the
following reference on temporal reasoning:

  Almeida, M. J., and Shapiro, S. C., Reasoning about the temporal
  structure of narrative texts.  Proceedings of the Fifth Annual Meeting
  of the Cognitive Science Society, Rochester, NY, 1983.


Fanya S. Montalvo at MIT echoed my interest in

  * knowledge representations for spatial/temporal reasoning;
  * inference methods for estimating meteorological variables
    from (spatially and temporally) sparse data;
  * methods of interfacing symbolic knowledge and heuristic
    reasoning with numerical simulation models;
  * a bibliography or guide to relevant literature.

She reports that good research along these lines is very scarce, but
suggests the following:

  As far as interfacing symbolic knowledge with heuristic reasoning with
  numerical simulation, Weyhrauch's FOL system is the best formalism I've
  seen/worked-with to do that.  Unfortunately there are few references to it.
  One is Filman, Lamping, & Montalvo in IJCAI'83.  Unfortunately it was too
  short.  There's a reference to Weyhrauch's Prolegomena paper in there.  Also
  there is Wood's, Greenfeld's, and Zdybel's work at BBN with KLONE and a ship
  location database; they're no longer there.  There's also Mark Friedell's
  Thesis from Case Western Reserve; see his SIGGRAPH'83 article, also
  references to Greenfeld & Yonke there.  Oh, yes, there's also Reid Simmons,
  here at MIT, on a system connecting diagrams in geologic histories with
  symbolic descriptions, AAAI'83.  The work is really in bits and pieces and
  hasn't really been put together as a whole working formalism yet.  The
  issues are hard.


Jim Hendler at Brown reports that Drew McDermott has recently written
several papers about temporal and spatial reasoning.  The best one on
temporal reasoning was published in Cognitive Science about a year ago.
Also, one of Drew's students at Yale recently did a thesis on spatial
reasoning.


David M. Axler, MSCF Applications Manager at Univ. of Pennsylvania, suggests:

  A great deal of info about weather already exists in a densely-encoded form,
  namely proverbs and traditional maxims.  Is there a way that this system can
  be converted to an expert system, if for no other reason than potential
  comparison between the analysis it provides with that gained from more
  formal meteorological approaches?

  If this is of interest, I can provide leads to collections of weather lore,
  proverbs, and the like.  If you're actually based at SRI, you're near
  several of the major folklore libraries and should have relatively easy
  access (California is the only state in the union with two grad programs in
  the field, one at Berkeley (under the anthro dept.), and one at UCLA) to the
  material, as both schools have decent collections.

I replied:

  The use of folklore maxims is a good idea, and one fairly easy to build
  into an expert system for prediction of weather at a single site.  (The
  user would have to enter observations such as "red sky at night" since
  pattern recognition couldn't be used.  Given that, I suspect that a
  Prospector-style inference net could be built that would simultaneously
  evaluate hypotheses of "rain", "fog", etc., for multiple time windows.)
  Construction of the system and evaluation of the individual rules would
  make an excellent thesis project.

  Unfortunately, I doubt that the National Weather Service or other such
  organization would be interested in having SRI build such a "toy"
  system.  They would be more interested in methods for tracking storm
  fronts and either automating or improving on the map products they
  currently produce.

  As a compromise, one project we have been considering is to automate
  a book of weather forecasting rules for professional forecasters.
  Such rule books do exist, but the pressures of daily forecasting are
  such that the books are rarely consulted.  Perhaps some pattern
  recognition combined with some man-machine dialog could trigger the
  expert system rules that would remind the user of relevant passages.

Dave liked the project, and suggested that there may be additional unofficial
rule sources such as those used by the Farmer's Almanac publishers.
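The Prospector-style evaluation of folklore rules suggested above can be
sketched with odds-likelihood updating (Python; the maxims, likelihood
ratios, and prior are invented for illustration, not drawn from any real
weather-lore collection):

```python
def update_prob(prior_prob, evidence):
    """Prospector-style Bayesian updating: convert the prior to odds,
    multiply by LS (sufficiency ratio) for each piece of evidence
    observed present, by LN (necessity ratio) for each observed absent,
    then convert back to a probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for observed, (ls, ln) in evidence:
        odds *= ls if observed else ln
    return odds / (1.0 + odds)

# Hypothetical rules for the hypothesis "fair weather tomorrow":
# (LS, LN) pairs chosen so present/absent evidence pull opposite ways.
rules = {"red sky at night": (4.0, 0.8),
         "falling barometer": (0.2, 2.5)}

# Observed: red sky at night; barometer known not to be falling.
p = update_prob(0.5, [(True, rules["red sky at night"]),
                      (False, rules["falling barometer"])])
```

With these made-up ratios the posterior works out to 10/11, roughly 0.91;
a real system would estimate LS and LN per maxim from forecast records.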


Philip Kahn at UCLA is interested in pattern recognition, and recommends
the book

  REMOTE SENSING: Optics and Optical Systems by Philip N. Slater
  Addison-Wesley Publ. Co., Reading, MA, 1980

for information on atmospherics, optics, films, testing/reliability, etc.


Alex Pang at UCLA is doing some non-AI image processing to aid weather
prediction.  He is interested in hearing about AI and meteorology.
Bill Havens at the University of British Columbia expressed interest,
particularly in methods that could be implemented on a personal computer.
Mike Uschold at Edinburgh and Noel Kropf at Columbia University (Seismology
Lab?) have also expressed interest.

                        ------------------

My thanks to all who replied.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂15-Feb-84  2052	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #19
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Feb 84  20:52:24 PST
Date: Tue 14 Feb 1984 17:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #19
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Feb 1984     Volume 2 : Issue 19

Today's Topics:
  Requests - OPS5 & IBM LISP,
  LISP - Timings,
  Bindings - G. Spencer-Brown,
  Knowledge Acquisition - Regrets,
  Alert - 4-Color Problem,
  Brain Theory - Definition,
  Seminars - Analogy & Causal Reasoning & Tutorial Discourse
----------------------------------------------------------------------

Date: Mon 13 Feb 84 10:06:53-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: OPS5 query

I'd like to find out some information on acquiring a copy of
the OPS5 system. Is there a purchase price, is it free-of-charge,
etc. Please send replies to

        G.TJM@SU-SCORE

Thanks.

--ted

------------------------------

Date: 1 Feb 1984 15:14:48 EST
From: Robert M. Simmons <simmons@EDN-UNIX>
Subject: lisp on ibm

Can anyone give me pointers to LISP systems that run on
IBM 370's under MVS?  Direct and indirect pointers are
welcome.

Bob Simmons
simmons@edn-unix

------------------------------

Date: 11 Feb 84 17:54:24 EST
From: John <Roach@RUTGERS.ARPA>
Subject: Timings of LISPs and Machines


I dug up these timings; they are a bit out of date but seem a little
more informative.  They were done by Dick Gabriel at SU-AI in 1982 and passed
along by Chuck Hedrick at Rutgers.  Some of the times have been updated to
reflect current machines by myself.  These have been marked with the
date of 1984.  All machines were measured using the function -

an almost Takeuchi function as defined by John McCarthy

(defun tak (x y z)
       (cond ((not (< y x))
              z)
             (t (tak (tak (1- x) y z)
                     (tak (1- y) z x)
                     (tak (1- z) x y)))))
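For readers without a Lisp system at hand, a direct Python transcription
of the same almost-Takeuchi function (not part of the original timings)
looks like this:

```python
def tak(x, y, z):
    # Almost-Takeuchi function: recurse until y >= x, then return z,
    # mirroring the (not (< y x)) test in the Lisp version.
    if not (y < x):
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

print(tak(18, 12, 6))  # prints 7
```

The benchmark call below, (tak 18. 12. 6.), evaluates to 7; the work is
in the tens of thousands of recursive calls made along the way.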

------------------------------------------

(tak 18. 12. 6.)

On 11/750 in Franz ordinary arith     19.9   seconds compiled
On 11/780 in Franz with (nfc)(TAKF)   15.8   seconds compiled   (GJC time)
On Rutgers-20 in Interlisp/1984       13.8   seconds compiled
On 11/780 in Franz (nfc)               8.4   seconds compiled   (KIM time)
On 11/780 in Franz (nfc)               8.35  seconds compiled   (GJC time)
On 11/780 in Franz with (ffc)(TAKF)    7.5   seconds compiled   (GJC time)
On 11/750 in PSL, generic arith        7.1   seconds compiled
On MC (KL) in MacLisp (TAKF)           5.9   seconds compiled   (GJC time)
On Dolphin in InterLisp/1984           4.81  seconds compiled
On Vax 11/780 in InterLisp (load = 0)  4.24  seconds compiled
On Foonly F2 in MacLisp                4.1   seconds compiled
On Apollo (MC68000) PASCAL             3.8   seconds            (extra waits?)
On 11/750 in Franz, Fixnum arith       3.6   seconds compiled
On MIT CADR in ZetaLisp                3.16  seconds compiled   (GJC time)
On MIT CADR in ZetaLisp                3.1   seconds compiled   (ROD time)
On MIT CADR in ZetaLisp (TAKF)         3.1   seconds compiled   (GJC time)
On Apollo (MC68000) PSL SYSLISP        2.93  seconds compiled
On 11/780 in NIL (TAKF)                2.8   seconds compiled   (GJC time)
On 11/780 in NIL                       2.7   seconds compiled   (GJC time)
On 11/750 in C                         2.4   seconds
On Rutgers-20 in Interlisp/Block/84    2.225 seconds compiled
On 11/780 in Franz (ffc)               2.13  seconds compiled   (KIM time)
On 11/780 (Diablo) in Franz (ffc)      2.1   seconds compiled   (VRP time)
On 11/780 in Franz (ffc)               2.1   seconds compiled   (GJC time)
On 68000 in C                          1.9   seconds
On Utah-20 in PSL Generic arith        1.672 seconds compiled
On Dandelion in Interlisp/1984         1.65  seconds compiled
On 11/750 in PSL INUM arith            1.4   seconds compiled
On 11/780 (Diablo) in C                1.35  seconds
On 11/780 in Franz (lfc)               1.13  seconds compiled   (KIM time)
On UTAH-20 in Lisp 1.6                 1.1   seconds compiled
On UTAH-20 in PSL Inum arith           1.077 seconds compiled
On Rutgers-20 in Elisp                 1.063 seconds compiled
On Rutgers-20 in R/UCI lisp             .969 seconds compiled
On SAIL (KL) in MacLisp                 .832 seconds compiled
On SAIL in bummed MacLisp               .795 seconds compiled
On MC (KL) in MacLisp (TAKF,dcl)        .789 seconds compiled
On 68000 in machine language            .7   seconds
On MC (KL) in MacLisp (dcl)             .677 seconds compiled
On SAIL in bummed MacLisp (dcl)         .616 seconds compiled
On SAIL (KL) in MacLisp (dcl)           .564 seconds compiled
On Dorado in InterLisp Jan 1982 (tr)    .53  seconds compiled
On UTAH-20 in SYSLISP arith             .526 seconds compiled
On SAIL in machine language             .255 seconds (wholine)
On SAIL in machine language             .184 seconds (ebox-doesn't include mem)
On SCORE (2060) in machine language     .162 seconds (ebox)
On S-1 Mark I in machine language       .114 seconds (ebox & ibox)

I would be interested if people who have these machines/languages available
could update some of the timings.  There also aren't any timings for Symbolics
or LMI machines.

John.

------------------------------

Date: Sun, 12 Feb 1984  01:14 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: AIList Digest   V2 #14

In regard to G. Spencer Brown: if you are referring to the author of
the Laws of Form, if that's what it was called, I believe he was
a friend of Bertrand Russell and that he logged out
quite a number of years ago.

------------------------------

Date: Sun, 12 Feb 84 14:18:04 EST
From: Brint <abc@brl-bmd>
Subject: Re:  "You cant go home again"

I couldn't agree more (with your feelings of regret at not
capturing the expertise of the "oldster" in meteorological
lore).

My dad was one of the best automotive diagnosticians in
Baltimore until his death six years ago.  His uncanny
ability to pinpoint a problem's cause from external
symptoms was locally legendary.  Had I known then what I'm
beginning to learn now about the promise of expert systems,
I'd have spent many happy hours "picking his brain" with
the (unfilled) promise of making us both rich!

------------------------------

Date: Mon 13 Feb 84 22:15:08-EST
From: Jonathan Intner <INTNER@COLUMBIA-20.ARPA>
Subject: The 4-Color Problem

To Whom It May Concern:

        The computer proof of the 4-color problem can be found in
Appel, K., and W. Haken, "Every Planar Map is Four Colorable, Part I:
Discharging" and "Every Planar Map is Four Colorable, Part II:
Reducibility," Illinois Journal of Mathematics, 21, 429-567 (1977).  I
haven't looked at this myself, but I understand from Mike Townsend (a
Prof here at Columbia) that the proof is a real mess and involves
thousands of special cases.

        Jonathan Intner
        INTNER@COLUMBIA-20.ARPA

------------------------------

Date: 11 Feb 1984 13:50-PST
From: Andy Cromarty <andy@AIDS-Unix>
Subject: Re: Brain, a parallel processor?

        What is the evidence that the brain is a parallel processor?
        My own introspection seems to indicate that mine is doing time-sharing.
                        -- Rene Bach <BACH@SUMEX-AIM.ARPA>

You are confusing "brain" with "mind".

------------------------------

Date: 10 Feb 1984  15:23 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

                     [Forwarded by SASW@MIT-MC.]

Wednesday, February 15, 4:00pm 8th floor playroom

Structure-Mapping: A Theoretical Framework for Analogy
Dedre Gentner

The structure-mapping theory of analogy describes a set of
principles by which the interpretation of an analogy is derived
from the meanings of its terms.  These principles are
characterized as implicit rules for mapping knowledge about a
base domain into a target domain.  Two important features of the
theory are (1) the rules depend only on syntactic properties of
the knowledge representation, and not on the specific content of
the domains; and (2) the theoretical framework allows analogies
to be distinguished cleanly from literal similarity statements,
applications of general laws, and other kinds of comparisons.

Two mapping principles are described: (1) Relations between
objects, rather than attributes of objects, are mapped from base
to target; and (2) The particular relations mapped are determined
by systematicity, as defined by the existence of higher-order
relations.  Psychological experiments supporting the theory are
described, and implications for theories of learning are
discussed.


COMING SOON: Tomas Lozano-Perez, Jerry Barber, Dan Carnese, Bob Berwick, ...

------------------------------

Date: Mon 13 Feb 84 09:15:36-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FEBRUARY 24, 1984

[Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   February 24, 1984
LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:   Ben Kuipers, Department of Mathematics
           Tufts University

TOPIC:     Studying Experts to Learn About Qualitative
                       Causal Reasoning


By analyzing a  verbatim protocol  of an expert's  explanation we  can
derive constraints on the conceptual  framework used by human  experts
for causal reasoning  in medicine.   We use  these constraints,  along
with  textbook  descriptions  of  physiological  mechanisms  and   the
computational requirements  of successful  performance, to  propose  a
model of qualitative causal reasoning.  One important design  decision
in the model is the selection of the "envisionment" version of  causal
reasoning  rather  than  a  version  based  on  "causal  links."   The
envisionment process performs a qualitative simulation, starting  with
a description  of the  structure  of a  mechanism and  predicting  its
behavior.  The qualitative causal reasoning algorithm is a step toward
second-generation medical diagnosis programs  that understand how  the
mechanisms of  the  body work.   The  protocol analysis  method  is  a
knowledge  acquisition  technique   for  determining  the   conceptual
framework of new  types of  knowledge in  an expert  system, prior  to
acquiring large amounts of domain-specific knowledge.  The qualitative
causal reasoning algorithm has been implemented and tested on  medical
and non-medical examples.  It will be the core of RENAL, a new  expert
system for diagnosis in nephrology, that we are now developing.

------------------------------

Date: 12 Feb 84 0943 EST (Sunday)
From: Alan.Lesgold@CMU-CS-A (N981AL60)
Subject: colloquium announcement

          [Forwarded from the CMU-C bboard by Laws@SRI-AI.]


                 THE INTELLIGENT TUTORING SYSTEM GROUP
                LEARNING RESEARCH AND DEVELOPMENT CENTER
                        UNIVERSITY OF PITTSBURGH

                          AN ARCHITECTURE FOR
                           TUTORIAL DISCOURSE

                            BEVERLY P. WOOLF
              COMPUTER AND INFORMATION SCIENCE DEPARTMENT
                      UNIVERSITY OF MASSACHUSETTS

                        WEDNESDAY, FEBRUARY 15,
              2:00 - 3:00, LRDC AUDITORIUM (SECOND FLOOR)

    Human  discourse is quite complex compared to the present ability of
machines to handle communication.  Sophisticated research into discourse
is needed before we can construct intelligent interactive systems.  This
talk presents recent research in the areas of discourse generation, with
emphasis on teaching and tutoring dialogues.
    This talk describes MENO, a system in which hand-tailored rules have
been used to generate flexible responses in the face of student
failures.  The  system  demonstrates  the  effectiveness  of  separating
tutoring  knowledge  and  tutoring  decisions  from  domain  and student
knowledge.  The design of  the  system  suggests  a  machine  theory  of
tutoring and uncovers some of the conventions and intuitions of tutoring
discourse.  This research is applicable to any intelligent interface
which must reason about the user's knowledge.

------------------------------

End of AIList Digest
********************

∂22-Feb-84  1137	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #20
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Feb 84  11:36:51 PST
Date: Fri 17 Feb 1984 09:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #20
To: AIList@SRI-AI


AIList Digest            Friday, 17 Feb 1984       Volume 2 : Issue 20

Today's Topics:
  Lisp - Timing Data Caveat,
  Bindings - G. Spencer Brown,
  Logic - Nature of Undecidability,
  Brain Theory - Parallelism,
  Expert Systems - Need for Perception,
  AI Culture - Work in Progress,
  Seminars - Learning & Automatic Deduction & Commonsense Reasoning
----------------------------------------------------------------------

Date: 16 Feb 1984 1417-PST
From: VANBUER at USC-ECL.ARPA
Subject: Timing Data Caveat

A warning on the TAK performance testing:  this code exercises only
function calling and small-integer arithmetic, and none of the things
most heavily used in "real" Lisp programming: CONSing, garbage collection,
and paging (AI programs are big, after all).
        Darrel J. Van Buer

------------------------------

Date: Wed, 15 Feb 84 11:15:21 EST
From: John McLean <mclean@NRL-CSS>
Subject: G. Spencer-Brown and undecidable propositions


G. Spencer-Brown is very much alive.  He spent several months at NRL a couple
of years ago and presented lectures on his purported proof of the four color
theorem.  Having heard him lecture on several topics previously, I did not feel
motivated to attend his lectures on the four color theorem so I can't comment
on them first hand.  Those who knew him better than I believe that he is
currently at Oxford or Cambridge.  By the way, he was not a friend of Russell's
as far as I know.  Russell merely said something somewhat positive about LAWS
OF FORM.

With respect to undecidability, I can't figure out what Charlie Crummer means
by "undecidable proposition".  The definition I have always seen is that a
proposition is undecidable with respect to a set of axioms if it is
independent, i.e., neither the proposition nor its negation is provable.
(An undecidable theory is a different kettle of fish altogether.) Examples are
Euclid's 5th postulate with respect to the other 4, Goedel's sentence with
respect to first order number theory, the continuum hypothesis with respect to
set theory, etc.  I can't figure out the claim that one can't decide whether
an undecidable proposition is decidable or not.  Euclid's 5th postulate,
Goedel's sentence, and the continuum hypothesis have been proven to be
undecidable.  For simple theories, such as sentential logic (i.e., no
quantifiers), there are even algorithms for detecting undecidability.
                                                                    John McLean

------------------------------

Date: Wed, 15 Feb 84 11:18:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: G. Spencer-Brown and undecidable propositions

Thanks for the lead to G. S-B.  I think I understand what he is driving at with
THE LAWS OF FORM, so I would like to see his alleged 4-color proof.

Re: undecidability... Is it true that all propositions can be proved decidable
or not with respect to a particular axiomatic system from WITHIN that system?
My understanding is that this is not generally possible.  Example (Not a proof
of my understanding):  Is the value of the statement "This statement is false."
decidable from within Boolean logic?  It seems to me that from within Boolean
logic, i.e. 2-valued logic, all that would be seen is that no matter how long
I crank I never seem to be able to settle down to a unique value.  If this
proposition is fed to a 2-valued logic program (written in PROLOG, LISP, or
whatever language one desires) the program just won't halt.  From OUTSIDE the
machine, a human programmer can easily detect the problem but from WITHIN
the Boolean system it's not possible.  This seems to be an example of the
halting problem.
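That non-halting behavior is easy to exhibit concretely.  Here is a toy
sketch (mine, not from the message above): a two-valued evaluator that
iterates the liar sentence's truth condition simply oscillates between
true and false and never reaches a fixed point.

```python
def liar(value):
    # "This statement is false": its truth value is the
    # negation of whatever value we currently assign it.
    return not value

def evaluate(update, start, max_steps):
    # A naive two-valued evaluator: iterate until the assigned
    # truth value stops changing (a fixed point), or give up.
    value = start
    for _ in range(max_steps):
        new_value = update(value)
        if new_value == value:
            return value  # settled on a consistent truth value
        value = new_value
    return None  # never settled: no fixed point within the budget

print(evaluate(liar, True, 1000))  # prints None: no stable truth value
```

A consistent sentence (one whose update map has a fixed point) settles
immediately; the liar never does, which is the "cranking forever" Charlie
describes.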

--Charlie

------------------------------

Date: 16 Feb 1984  12:22 EST (Thu)
From: "Steven C. Bagley" <BAGLEY%MIT-OZ@MIT-MC.ARPA>
Subject: Quite more than you want to know about George Spencer Brown

Yes, Spencer Brown was associated with Russell, but since Lord Russell
died recently (1970), I think it safe to assume that not ALL of his
associates are dead, yet, at least.

There was a brief piece about Spencer Brown in "New Scientist" several
years ago (vol. 73, no. 1033, January 6, 1977, page 6).  Here are two
interesting quotes:

"What sets him apart from the many others who have claimed a proof of
the [four-color] theorem are his technique, and his personal style.
Spencer Brown's technique rests on a book he wrote in 1964 called
`Laws of Form.'  George Allen and Unwin published it in 1969, on the
recommendation of Bertrand Russell.  In the book he develops a new
algebra of logic -- from which the normal Boolean algebra (a means of
representing propositions and arguments with symbols) can be derived.
The book has had a mixed reputation, from `a work of genius' to
`pretentious triviality.'  It is certainly unorthodox, and mixes
metaphysics and mathematics.  Russell himself was taken with the work,
and mentions it in his autobiography....

The style of the man is extravagant -- he stays at the Savoy -- and
all-embracing.  He was in the Royal Navy in the Second World War; has
degrees in philosophy and psychology (but not mathematics); was a
lecturer in logic at Christ Church College, Oxford; wrote a treatise
on probability; a volume of poetry, and a novel; was a chief logic
designer with Mullard Equipment Ltd where his patented design of a
transistorised elevator logic circuit led to `Laws of Form'; has two
world records for gliding; and presently lectures part-time in the
mathematics department at the University of Cambridge while also
managing his publishing business."

I know of two reviews of "Laws of Form": one by Stafford Beer, the
British cyberneticist, which appeared in "Nature," vol. 223, Sept 27,
1969, and the other by Lancelot Law Whyte, which was published in the
British Journal of the Philosophy of Science, vol 23, 1972, pages
291-292.

Spencer Brown's probability work was published in a book called
"Probability and Scientific Inference", in the late 1950's, if my
memory serves me correctly.  There is also an early article in
"Nature" called "Statistical Significance in Psychical Research", vol.
172, July 25, 1953, pp. 154-156.  A comment by Soal, Stratton, and
Thouless on this article appeared in "Nature," vol. 172, Sept 26, 1953,
page 594, and a reply by Spencer Brown immediately follows.  The first
sentence of the initial article reads as follows: "It is proposed to
show that the logical form of the data derived from experiments in
psychical research which depend upon statistical tests is such as to
provide little evidence for telepathy, clairvoyance, precognition,
psychokinesis, etc., but to give some grounds for questioning the
practical validity of the test of significance used."  Careful Spencer
Brown watchers will be interested to note that this article lists his
affiliation as the Department of Zoology and Comparative Anatomy,
Oxford; he really gets around.

His works have had a rather widespread, if unorthodox, impact.
Spencer Brown and "Laws of Form" are mentioned in Adam Smith's "Powers
of Mind," a survey of techniques for mind expansion, contraction,
adjustment, etc. (e.g., EST and various flavors of hallucinogens); are
briefly noted in Arthur Koestler's "The Roots of Coincidence," which
is, naturally enough, about probability, coincidence, and
synchronicity; and are mentioned again in "The Dyadic Cyclone" by
Dr. John C. Lilly, dolphin aficionado and consciousness expander
extraordinaire.

If this isn't an eclectic enough collection of trivia about Spencer
Brown, keep reading.  Here is a quote from his book "Only Two Can Play
This Game", written under the pseudonym of James Keys.  "To put it
bluntly, it looks as if the male is so afraid of the fundamentally
different order of being of the female, so terrified of her huge
magical feminine power of destruction and regeneration, that he
doesn't look at her as she really is, he is afraid to accept the
difference, and so has repressed into his unconscious the whole idea
of her as ANOTHER ORDER OF BEING, from whom he might learn what he
could not know of himself alone, and replaced her with the idea of a
sort of second-class replica of himself who, because she plays the
part of a man so much worse than a man, he can feel safe with because
he can despise her."

There are some notes at the end of this book (which isn't really a
novel, but his reflections, written in the heat of the moment, about
the breakup of a love affair) which resemble parts of "Laws of Form":
"Space is a construct.  In reality there is no space.  Time is also a
construct.  In reality there is no time.  In eternity there is space
but no time.  In the deepest order of eternity there is no space....In
a qualityless order, to make any distinction at all is at once to
construct all things in embryo...."

And last, I have no idea of his present-day whereabouts.  Perhaps try
writing to him c/o Cambridge University.

------------------------------

Date: Thu, 16 Feb 84 13:58:28 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Quite more than you want to know about George Spencer Brown

Thank you for the copious information on G. S-B.  If I can't get in touch
with him now, it will be because he does not want to be found.

After the first reading of the first page of "The Laws of Form" I almost
threw the book away.  I am glad, however, that I didn't.  I have read it
several times and thought carefully about it and I think that there is much
substance to it.

  --Charlie

------------------------------

Date: 15 Feb 84  2302 PST
From: John McCarthy <JMC@SU-AI>
Subject: Serial or parallel

        It seems to me that introspection can tell us that the brain
does many things serially.  For example, a student with 5 problems
on an examination cannot set 5 processes working on them.  Indeed
I can't see that introspection indicates that anything is done
in parallel, although it does indicate that many things are done
subconsciously.  This is non-trivial, because one could imagine
a mind that could set several processes going subconsciously and
then look at them from time to time to see what progress they
were making.

        On the other hand, anatomy suggests and physiological
experiments confirm that the brain does many things in parallel.
These things include low level vision processing and probably
also low level auditory processing and also reflexes.  For example,
the blink reflex seems to proceed without thought, although it
can be observed and in parallel with whatever else is going on.
Indeed one might regard the blink reflex and some well learned
habits as counter-examples to my assertion that one can't set
parallel processes going and then observe them.

        All else seems to be conjecture.  I'll conjecture that
a division of neural activity into serial and parallel activities
developed very early in evolution.  For example, a bee's eye is
a parallel device, but the bee carries out long chains of serial
activities in foraging.  My more adventurous conjecture is that
primate level intelligence involves applying parallel pattern
recognition processes evolved in connection with vision to records
of the serial activities of the organism.  The parallel processes
of recognition are themselves subconscious, but the results have
to take part in the serial activity.  Finally, seriality seems
to be required for coherence.  An animal that seeks food by
locomotion works properly only if it can go in one direction
at a time, whereas a sea anemone can wave all its tentacles at
once and needs only very primitive seriality that can spread
in a wave of activity.

        Perhaps someone who knows more physiology can offer more
information about the division of animal activity into serial
and parallel kinds.

------------------------------

Date: Wed, 15 Feb 84 22:40:48 pst
From: finnca1%ucbtopaz.CC@Berkeley
Subject: Re:  "You cant go home again"
        Date:     Sun, 12 Feb 84 14:18:04 EST
        From: Brint <abc@brl-bmd>

        I couldn't agree more (with your feelings of regret at not
        capturing the expertise of the "oldster" in meteorological
        lore).

        My dad was one of the best automotive diagnosticians in
        Baltimore [...]

Ah yes, the scarcest of experts these days:  a truly competent auto
mechanic!  But don't you still need an expert to PERCEIVE the subtle
auditory cues and translate them into symbolic form?

Living in the world is a full time job, it seems.

                Dave N. (...ucbvax!ucbtopaz!finnca1)

------------------------------

Date: Monday, 13 Feb 1984 18:37:35-PST
From: decwrl!rhea!glivet!zurko@Shasta
Subject: Re: The "world" of CS

        [Forwarded from the Human-Nets digest by Laws@SRI-AI.]

The best place for you to start would be with Sheri Turkle, a
professor at MIT's STS department.  She's been studying both the
official and unofficial members of the computer science world as a
culture/society for a few years now.  In fact, she's supposed to be
putting a book out on her findings, "The Intimate Machine".  Anyone
heard what's up with it?  I thought it was supposed to be out last
Sept, but I haven't been able to find it.
        Mez

------------------------------

Date: 14 Feb 84 21:50:52 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Learning Seminar

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      MACHINE LEARNING BROWN BAG SEMINAR

Title:     When to Learn
Speaker:   Michael Sims
Date:      Wednesday, Feb. 15, 1984 - 12:00-1:30
Location:  Hill Center, Room 254 (note new location)

       In  this  informal  talk I will describe issues which I have broadly
    labeled  'when  to  learn'.    Most  AI  learning  investigations  have
    concentrated  on  the  mechanisms  of  learning.    In  part  this is a
    reasonable consequence of AI's close  relationship  with  the  'general
    process tradition' of psychology [1].  The influences of ecological and
    ethological   (i.e.,  animal  behavior)  investigations  have  recently
    challenged this research methodology in psychology, and I believe  this
    has important ramifications for investigations of machine learning.  In
    particular,  this  influence  would  suggest that learning is something
    which takes place when an appropriate environment  and  an  appropriate
    learning  mechanism  are  present,  and  that  it  is  inappropriate to
    describe learning by describing a learning mechanism without describing
    the environment in which it operates.  The most cogent new issues which
    arise are the description of the environment, and  the  confronting  of
    the  issue  of  'when  to learn in a rich environment'.   By a learning
    system in a 'rich environment' I  mean  a  learning  system  which must
    extract the items to be learned from sensory input which is too rich to
    be  exhaustively stored.  Most present learning systems operate in such
    a restrictive environment that there is no question of what or when  to
    learn.   I will also present a general architecture for such a learning
    system in a rich environment, called a Pattern Directed Learning Model,
    which was motivated by biological learning systems.


                                  References

[1]   Johnston, T. D.
      Contrasting approaches to a theory of learning.
      Behavioral and Brain Sciences 4:125-173, 1981.

------------------------------

Date: Wed 15 Feb 84 13:16:07-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: "Automatic deduction" and other stuff

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A reminder that the seminar on automatic reasoning / theorem proving /logic
programming / mumble mumble mumble  which I advertised earlier is going to
begin shortly, under one title or another.   It will tentatively be on
Wednesdays at 1:30 in MJH301.   If you wish to be on the mailing list for this,
please mail to me or Yoni Malachi (YM@SAIL).   But if you are already on
Carolyn Talcott's mailing list for the MTC seminars, you will probably be
included on the new list unless you ask not to be.

For those interested specifically in the MRS system, we plan to continue MRS
meetings, also on Weds., at 10:30, starting shortly.   I expect to announce
such meetings on the MRSusers distribution list.   To get on this, mail to me
or Milt Grinberg (GRINBERG@SUMEX).   Note that MRSusers will contain other
announcements related to MRS as well.
                                                - Richard

------------------------------

Date: Wed 15 Feb 84
Subject: McCarthy Lectures on Commonsense Knowledge

      [Forwarded from the Stanford CSLI newsletter by Laws@SRI.]


   MCCARTHY LECTURES ON THE FORMALIZATION OF COMMONSENSE KNOWLEDGE

     John McCarthy  will  present  the remaining three lectures of his
series (the first of the four was held January 20) at 3:00 p.m. in the
Ventura Hall Seminar Room on the dates shown below.

Friday, Feb. 17   "The Circumscription Mode of Nonmonotonic Reasoning"

        Applications of circumscription to formalizing commonsense
        facts.  Application to the frame problem, the qualification
        problem, and to the STRIPS assumption.

Friday, March 2   "Formalization of Knowledge and Belief"

        Modal and first-order formalisms.  Formalisms in which possible
        worlds are explicit objects.  Concepts and propositions as
        objects in theories.

Friday, March 9   "Philosophical Conclusions Arising from AI Work"

        Approximate theories, second-order definitions of concepts,
        ascription of mental qualities to machines.

------------------------------

End of AIList Digest
********************

∂22-Feb-84  1758	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #21
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Feb 84  17:56:38 PST
Date: Wed 22 Feb 1984 16:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #21
To: AIList@SRI-AI


AIList Digest           Thursday, 23 Feb 1984      Volume 2 : Issue 21

Today's Topics:
  Waveform Analysis - EEG/EKG Request,
  Laws of Form - Comment,
  Review - Commercial NL Review in High Technology,
  Humor - The Adventures of Joe Lisp,
  Seminars - Computational Discovery & Robotic Planning & Physiological
    Reasoning & Logic Programming & Mathematical Expert System
----------------------------------------------------------------------

Date: Tue, 21 Feb 84 22:29:05 EST
From: G B Reilly <reilly@udel-relay.arpa>
Subject: EEG/EKG Scoring

Has anyone done any work on automatic scoring and interpretation of EEG or
EKG outputs?

Brendan Reilly

[There has been a great deal of work in these areas.  Good sources are
the IEEE pattern recognition or pattern recognition and image processing
conferences, IEEE Trans. on Pattern Analysis and Machine Intelligence,
IEEE Trans. on Computers, and the Pattern Recognition journal.  There
have also been some conferences on medical pattern recognition.  Can
anyone suggest a bibliography, special issue, or book on these subjects?
Have there been any AI (as opposed to PR) approaches to waveform diagnosis?
-- KIL]

------------------------------

Date: 19-Feb-84 02:14 PST
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: G. Spencer-Brown and the Laws of Form

I know of someone who talked with G. on the telephone about six years
ago somewhere in Northern California.  My friend developed a quantum
logic for expressing paradoxes, and some forms of schizophrenia, among
other things.  Puts fuzzy set theory to shame.  Anyway, he wanted to
get together with G. to discuss his own work and what he perceived in
the Laws of Form as very fundamental problems in generality due to
over-simplicity.  G. refused to meet without being paid fifty or so
dollars per hour.

Others say that the LoF's misleading notation masks the absence of any
significant proofs.  They observe that the notation uses whitespace as
an implicit operator, something that becomes obvious in an attempt to
parse it when represented as character strings in a computer.
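That parsing point can be made concrete.  Under one common Boolean reading
of the primary algebra (unmarked space = false, enclosure = negation,
juxtaposition = disjunction), an evaluator must treat bare adjacency as an
operator.  The sketch below is my own illustration, not from the original
message, with parentheses standing in for Spencer-Brown's crosses:

```python
def eval_lof(expr):
    # Evaluate a Laws-of-Form expression written with parentheses
    # as crosses, under the Boolean reading described above.
    value, _ = _eval(expr, 0)
    return value

def _eval(s, i):
    # Evaluate a run of juxtaposed terms until end of string or ')'.
    value = False  # empty space: the unmarked state
    while i < len(s):
        if s[i] == '(':
            inner, i = _eval(s, i + 1)
            value = value or (not inner)  # enclosure negates its contents
        elif s[i] == ')':
            return value, i + 1
        else:
            i += 1  # literal blanks carry no meaning of their own
    return value, i

# The two axioms of the primary arithmetic come out right:
print(eval_lof("()()"))  # calling:  ()() = ()    -> True  (marked)
print(eval_lof("(())"))  # crossing: (()) = blank -> False (unmarked)
```

Note that the disjunction between adjacent terms appears nowhere in the
input string; it lives entirely in the parser's loop, which is exactly the
implicit-operator problem the message describes.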

I became interested in the Laws of Form when it first came out as it
promised to be quite an elegant solution to the most obscure proofs of
Whitehead and Russell's Principia Mathematica.  The LoF carried to
perfection a very similar simplification I attempted while studying
the same logical foundations of mathematics.  One does not get too far
into the proofs before getting the distinct feeling that there has GOT
to be a better way.

It would be interesting to see an attempt to express the essence of
Go:del's sentence in the LoF notation.

 -- kirk

------------------------------

Date: Fri 17 Feb 84 10:57:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commercial NL Review in High Technology

The February issue of High Technology has a short article on
natural language interfaces (to databases, mainly).  The article
and business outlook section mention four NL systems currently
on the market, led by AIC's Intellect ($70,000, IBM mainframes),
Frey Associates' Themis ($24,000, DEC VAX-11), and Cognitive
System's interface.  (The fourth is not named, but some OEMs and
licensees of the first two are given.)  The article says that
four more systems are expected out this year, and discusses
Symantec's system ($400-$600, IBM PC with 256 Kbytes and hard disk)
and Cal Tech's ASK (HP9836 micro, licensed to HP and DEC).

------------------------------

Date:    Tue, 14 Feb 84 11:21:09 EST
From:    Kris Hammond <Hammond@YALE>
Subject: *AI-LUNCH*

         [Forwarded from a Yale bboard by Shrager@CMU-PSY-A.]

                 THE ADVENTURES OF JOE LISP, T MAN

   Brought to you by:  *AI-LUNCH*, it's hot,  it's cold,  it's more than
   a lunch...
                      This week's episode:

                  The Case of the Bogus Expert
                            Part I

   It was  late  on  a  Tuesday and I was dead in my seat from nearly an
   hour of grueling mail reading and idle chit-chat with random  passers
   by.  The  only  light  in  my  office  was the soft glow from my CRT,
   the only sound was the pain wracked rattle of  an  over-heated  disk.
   It was  raining  out,  but  the  steady staccato rhythm that beat its
   way into the skulls of others was held  back  by  the  cold  concrete
   slabs of  my windowless walls.  I like not having windows, but that's
   another story.

   I didn't hear her come in, but when the  scent  of  her  perfume  hit
   me, my  head swung faster than a Winchester.  She was wearing My-Sin,
   a perfume with the smell of an expert, but that wasn't what impressed
   me.  What  hit  me  was  her  contours.   She had a body with all the
   right variables.  She wore a dress with a single closure that  barely
   hid the  dynamic  scoping  of  what  was  underneath.  Sure I saw her
   as an object, but I guess I'm just object oriented.   It's  the  kind
   of operator I am.

   After she sat down and began to tell her story I  realized  that  her
   sophisticated look  was  just  cover.  She was a green kid, still wet
   behind the ears.  In fact she was wet all over.  As I  said,  it  was
   raining outside.  It's an easy inference.

   It  seems  the  kid's  step-father  had  disappeared.   He had been a
   medical specialist,  diagnosis  and  prescription,  but  one  day  he
   started making  wild  claims  about  knowledge  and planning and then
   he  vanished.   I  had  heard  of  this  kind  before.    Some   were
   specialists.  Some  in  medicine,  some  in geology, but all were the
   same kind of guy.  I looked the girl in the eye  and  asked  the  one
   question she  didn't  want  to  hear,  "He's  rule-based, isn't he?".

   She turned  her  head away and that was all the answer I needed.  His
   kind were cold, unfeeling, unchanging, but she still  loved  him  and
   wanted him back again.

   Once I  got  a  full  picture of the guy I was sure that I knew where
   to find him, California.  It was the haven for his  way  of  thinking
   and acting.   I  was  sure  that he had been swept up by the EXPERTS.
   They were a cult that had grown up in the past few  years,  promising
   fast and  easy  enlightenment.   What  they  didn't tell you was that
   the price was your ability  to  understand  itself.   He  was  there,
   as sure as I was a T Man.

   I knew of at least one operative in California who could be  trusted,
   and I  knew  that  I had to talk to him before I could do any further
   planning.  I reached for the phone and gave him a call.

   The conversation was short and  sweet.   He  had  resource  conflicts
   and couldn't  give  me  a  hand  right now.  I assumed that it had to
   be more complex than that and almost  said  that  resource  conflicts
   aren't that easy to identify, but I had no time to waste on
   infighting while the real enemy was still at large.  Before he hung
   up, he  suggested  that  I pick up a radar detector if I was planning
   on driving out and asked if I could grab a half-gallon  of  milk  for
   him on  the  way.   I agreed to the favor, thanked him for his advice
   and wished him luck on his tan...

    That's all for now kids.  Tune in next week for part two of:

                  The Case of the Bogus Expert

                            Starring

                        JOE LISP, T MAN

   And remember kids, Wednesdays are *AI-LUNCH* days and  11:45  is  the
   *AI-LUNCH* time.  And kids, if you send in 3 box tops from *AI-LUNCH*
   you can get a JOE LISP magic decoder ring.  This  is  the  same  ring
   that saved  JOE  LISP only two episodes ago and is capable of parsing
   from surface to deep  structure  in  less  than  15  transformations.
   It's part plastic, part metal and all bogus, so order now.

------------------------------

Date: 17 February 1984 11:55 EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Computational Discovery of Mathematical Laws

          [Forwarded from the MIT-MC bboard by Laws@SRI-AI.]

TITLE:  "The Computational Discovery of Mathematical Laws: Experiments in Bin
           Packing"
SPEAKER:        Dr. Jon Bentley, Bell Laboratories, Murray Hill
DATE:           Wednesday, February 22, 1984
TIME:           3:30pm  Refreshments
                4:15pm  Lecture
PLACE:          Bldg. 2-338


Bin packing is a typical NP-complete problem that arises in many applications.
This talk describes experiments on two simple bin packing heuristics (First Fit
and First Fit Decreasing) which show that they perform extremely well on
randomly generated data.  On some natural classes of inputs, for instance, the
First Fit Decreasing heuristic finds an optimal solution more often than not.
The data leads to several startling conjectures; some have been proved, while
others remain open problems.  Although the details concern the particular
problem of bin packing, the theme of this talk is more general: how should
computer scientists use simulation programs to discover mathematical laws?
(This work was performed jointly with D.S. Johnson, F.T. Leighton and C.A.
McGeoch.  Tom Leighton will give a talk on March 12 describing proofs of some
of the conjectures spawned by this work.)
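The two heuristics named in the abstract are simple enough to state in a few lines. A sketch (bin capacity normalized to 1.0 and the sample data are my own; this is an illustration, not the experimental code from the talk):

```python
def first_fit(items, capacity=1.0):
    """First Fit: place each item in the first bin with room,
    opening a new bin only when no existing bin fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_decreasing(items, capacity=1.0):
    """First Fit Decreasing: First Fit after sorting largest-first."""
    return first_fit(sorted(items, reverse=True), capacity)

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
print(len(first_fit(items)))             # bins used by First Fit
print(len(first_fit_decreasing(items)))  # usually no worse, often better
```

On this data FFD packs the items into fewer bins than plain FF, which is the kind of empirical behavior the talk's conjectures quantify.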

HOST:   Professor Tom Leighton

THIS SEMINAR IS JOINTLY SPONSORED BY THE COMBINATORICS SEMINAR & THE THEORY OF
COMPUTATION SEMINAR

------------------------------

Date: 17 Feb 1984  15:14 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

[Forwarded from the MIT-OZ bboard by SASW@MIT-MC.]

[I am uncertain as to the interest of AIList readers in robotics,
VLSI and CAD/CAM design, graphics, and other CS-related topics.  My
current policy is to pass along material relating to planning and
high-level reasoning.  Readers with strong opinions for or against
such topics should write to AIList-Request@SRI-AI.  -- KIL]


AUTOMATIC SYNTHESIS OF FINE-MOTION STRATEGIES FOR ROBOTS

Tomas Lozano Perez

The use of force-based compliant motions enables robots to carry out
tasks in the presence of significant sensing and control errors.  It
is quite difficult, however, to discover a strategy of such motions to
achieve a task.  Furthermore, the choice of motions is quite sensitive
to details of geometry and to error characteristics.  As a result,
each new task presents a brand new and difficult problem.  These
factors motivate the need for automatic synthesis for compliant
motions.  In this talk I will describe a formal approach to the
synthesis of compliant motion strategies from geometric description of
assembly operations.

(This is joint work [no pun intended -- KIL] with Matt Mason of CMU
and Russ Taylor of IBM)

------------------------------

Date: Fri 17 Feb 84 09:02:29-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

             [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                                  PH.D. ORAL

                        USE OF ARTIFICIAL INTELLIGENCE
                            AND SIMPLE MATHEMATICS
                       TO ANALYZE A PHYSIOLOGICAL MODEL

                    JOHN C. KUNZ, STANFORD/INTELLIGENETICS

                               23 FEBRUARY 1984

                  MARGARET JACKS HALL, RM. 146, 2:30-3:30 PM


   The objective of this research is to demonstrate a methodology for design
and use of a physiological model in a computer program that suggests medical
decisions.  This methodology uses a physiological model based on first
principles and facts of physiology and anatomy.  The model includes inference
rules for analysis of causal relations between physiological events.  The model
is used to analyze physiological behavior, identify the effects of
abnormalities, identify appropriate therapies, and predict the results of
therapy.  This methodology integrates heuristic knowledge traditionally used in
artificial intelligence programs with mathematical knowledge traditionally used
in mathematical modeling programs.  A vocabulary for representing a
physiological model is proposed.

------------------------------

Date: Tue 21 Feb 84 10:47:50-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: ANNOUNCEMENT

[Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


Thursday, February 23, 1984

Professor Kenneth Kahn
Uppsala University

will give a talk:

"Logic Programming and Partial Evaluation as Steps Toward
 Efficient Generic Programming"

at: Bldg. 200, (History Building), Room 107, 12 NOON

PROLOG and extensions to it embedded in LM PROLOG will be presented as
a means of describing programs that can be used in many ways.  Partial
evaluation  is  a  process  that  automatically  produces   efficient,
specialized versions  of programs.   Two partial  evaluators, one  for
LISP and one for PROLOG, will be presented as a means for winning back
efficiency that  was sacrificed  for generality.   Partial  evaluation
will also be presented as a means of generating compilers.
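The idea can be illustrated on a toy case (example mine, not from the talk): specializing a general exponentiation routine for a fixed exponent recovers, mechanically, the efficiency of a hand-written straight-line version.

```python
def power(x, n):
    """General routine: the exponent n is a run-time parameter."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """A (very) partial evaluator: with n known at specialization
    time, the loop can be unrolled into one straight-line expression."""
    body = " * ".join(["x"] * n) if n else "1"
    namespace = {}
    exec(f"def power_{n}(x): return {body}", namespace)
    return namespace[f"power_{n}"]

cube = specialize_power(3)   # generated: def power_3(x): return x * x * x
print(cube(2), power(2, 3))  # same answer; no loop in the specialized version
```

A real partial evaluator works on program text rather than generating strings, but the payoff is the same: generality paid for once, at specialization time.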

------------------------------

Date: 21 Feb 84 15:27:53 EST
From: DSMITH@RUTGERS.ARPA
Subject: Rutgers University Computer Science Colloquium

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                            COLLOQUIUM

                    Department of Computer Science


         SPEAKER:   John Cannon
                    Dept. of Math
                    University of Sydney
                    Sydney, AUSTRALIA

         TITLE:    "DESIGN AND IMPLEMENTATION OF A PROGRAMMING
                    LANGUAGE/EXPERT SYSTEM FOR MODERN ALGEBRA"

                                  Abstract

Over the past 25 years a substantial body of algorithms has been
devised for computing structural information about groups.  In order
to make these techniques more generally available, I have undertaken
the development of a system for group theory and related areas of
algebra.  The system consists of a high-level language (having a
Pascal-like syntax) supported by an extensive library.  In that the
system attempts to plan, at a high level, the most economical solution
to a problem, it has some of the attributes of an expert system.  This
talk will concentrate on (a) the problems of designing appropriate
syntax for algebra and (b) the implementation of a language processor
which attempts to construct a model of the mathematical microworld
with which it is dealing.

          DATE:  Friday, February 24, 1984
          TIME:  2:50 p.m.
          PLACE: Hill Center - Room 705
               * Coffee served at 2:30 p.m. *

------------------------------

End of AIList Digest
********************

∂29-Feb-84  1547	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #22
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Feb 84  15:47:19 PST
Date: Wed 29 Feb 1984 13:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #22
To: AIList@SRI-AI


AIList Digest           Wednesday, 29 Feb 1984     Volume 2 : Issue 22

Today's Topics:
  Robotics - Personal Robotics Request,
  Books - Request for Laws of Form Review,
  Expert Systems - EURISKO Information Request,
  Automated Documentation Tools - Request,
  Mathematics - Fermat's Last Theorem & Map Coloring,
  Waveform Analysis - EEG/EKG Interpretation,
  Brain Theory - Parallelism,
  CS Culture - Computing Worlds
----------------------------------------------------------------------

Date: Thu 16 Feb 84 17:59:03-PST
From: PIERRE@SRI-AI.ARPA
Subject: Information about personal robots?

   Do you know anything about domestic robots?  Personal robots?
I'm interested in the names and addresses of companies, societies,
clubs, and universities involved in that field.  Are there any reviews
or articles about this?  Do you work on, or have you heard of, any
projects in this field?
Please send answers to Pierre@SRI-AI.ARPA.

          Pierre

------------------------------

Date: 23 Feb 84 13:58:28 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Laws of Form


I hope that Charlie Crummer will share some of the substance he finds in
"Laws of Form" with us (ref AIList Digest V2 #20).  I myself am more in the
group that does not understand what LoF has to say that is new, and indeed
doubt that it does say anything unique.

------------------------------

Date: Fri, 24 Feb 84 15:32 MST
From: RNeal@HIS-PHOENIX-MULTICS.ARPA
Subject: EURISKO

I have just begun reading the AI digests (our copy starts Nov 3 1983)
and I am very interested in the one or two transactions dealing with
EURISKO.  Could someone explain what EURISKO does, and maybe give some
background of its development?

On a totally different note, has anyone done any AI work on lower-order
intelligence (i.e., that using instinct), such as insects, reptiles, etc.?
It seems they would be easier to model, and I just wondered if anyone had
attempted to make a program which learns the way they do and does the
things they do.  I don't know if this belongs in AI or in some simulation
meeting (is there one?).
                      >RUSTY<

------------------------------

Date: 27 Feb 1984 07:26-PST
From: SAC.LONG@USC-ISIE
Subject: Automated Documentation Tools

Is anyone aware of software packages available that assist in the
creation of documentation of software, such as user manuals and
maintenance manuals?  I am not looking for simple editors which
are used to create text files, but something a little more
sophisticated which would reduce the amount of time one must
invest in creating manuals manually (with the aid of a simple editor).
If anyone has information about such, please send me a message at:

     SAC.LONG@USC-ISIE

or   Steve Long
     1018-1 Ave H
     Plattsmouth NE 68048

or   (402)294-4460 or reply through AIList.

Thank you.

  --  Steve

------------------------------

Date: 16 Feb 84 5:36:12-PST (Thu)
From: decvax!genrad!wjh12!foxvax1!minas @ Ucb-Vax
Subject: Re: Fermat's Last Theorem & Undecidable Propositions
Article-I.D.: foxvax1.317

Could someone please help out an ignorant soul by posting a brief (if that
is, indeed, possible!) explanation of what Fermat's last theorem states as
well as what the four-color theorem is all about.  I'm not looking for an
explanation of the proofs, but, simply, a statement of the propositions.

Thanks!

-phil minasian          decvax!genrad!wjh12!foxvax1!minas
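For reference, the two propositions can be stated compactly (statements only, no proofs; note that Fermat's claim was still unproven at this writing, while the four-color result had Appel and Haken's 1976 computer-assisted proof):

```latex
% Fermat's Last Theorem: no positive integers x, y, z satisfy
% x^n + y^n = z^n for any integer exponent n greater than 2.
\neg \exists\, x, y, z, n \in \mathbb{Z}^{+},\ n > 2 :\quad
    x^{n} + y^{n} = z^{n}

% Four-Color Theorem: every map in the plane (regions counted as
% adjacent only when they share a border segment, not just a point)
% can be colored with at most four colors; equivalently, every
% planar graph G has chromatic number
\chi(G) \le 4
```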

------------------------------

Date: 15 Feb 84 20:15:33-PST (Wed)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: mit-eddi.1290

I had thought that 4 color planar had been proved, but that
the "conjectures" of 5 colors for a sphere and 7 for a torus
were still waiting.  (Those numbers are right, aren't they?)

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 17 Feb 84 21:33:46-PST (Fri)
From: decvax!dartvax!dalcs!holmes @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: dalcs.610

        The four colour problem is the same for a sphere as it is
for the infinite plane.  The problem for a torus was solved many
years ago.  The torus needs exactly 7 colours to paint it.

                                        Ray
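The numbers in this thread can be checked mechanically on small examples. A backtracking colorability test (the graph encoding is mine) confirms, for instance, that the complete graph K4 needs exactly 4 colors, and that K7, which embeds on the torus, needs exactly 7:

```python
def colorable(graph, k):
    """Backtracking test: can `graph` (dict: vertex -> set of
    neighbors) be properly colored with at most k colors?"""
    vertices = list(graph)
    color = {}

    def assign(i):
        if i == len(vertices):
            return True                 # every vertex colored
        v = vertices[i]
        used = {color[u] for u in graph[v] if u in color}
        for c in range(k):
            if c not in used:           # c clashes with no neighbor
                color[v] = c
                if assign(i + 1):
                    return True
                del color[v]            # backtrack
        return False

    return assign(0)

def complete_graph(n):
    return {v: {u for u in range(n) if u != v} for v in range(n)}

# K4 is planar and needs exactly 4 colors; K7 embeds on the torus
# and needs exactly 7, matching the torus figure quoted above.
print(colorable(complete_graph(4), 4), colorable(complete_graph(4), 3))
print(colorable(complete_graph(7), 7), colorable(complete_graph(7), 6))
```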

------------------------------

Date: 26 Feb 1984 21:38:16-PST
From: utcsrgv!utai!tsotsos@uw-beaver
Subject: AI approach to ECG analysis

One of my PhD students, Taro Shibahara, has been working on an expert
system for arrhythmia analysis. The thesis should be finished by early summer.
A preliminary paper discussing some design issues appeared in IJCAI-83.
System name is CAA - Causal Arrhythmia Analyzer.  Important contributions:
two distinct KBs, one for the signal domain and the other for the
electrophysiological domain; communication via a "projection" mechanism;
causal relations to assist in prediction; and use of meta-knowledge within
a frame-based representation for statistical knowledge.  The overall
structure is based on the
ALVEN expert system for left ventricular performance assessment, developed
here as well.

John Tsotsos
Dept. of Computer Science
University of Toronto

[Ray Perrault <RPERRAULT@SRI-AI> also suggested this lead.  -- KIL]

------------------------------

Date: 24 Feb 84 10:07:36-PST (Fri)
From: decvax!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: computer ECG
Article-I.D.: ecsvax.2043

At least three companies currently market computer ECG analysis
systems: Marquette Electronics, IBM, and Hewlett-Packard.  We use the
Marquette system which works quite well.  Marquette and IBM use variants of
the same program (the "Bonner" program below, original development funded by
IBM.)  Apparently because of fierce competition, much current information,
particularly with regard to algorithms, is proprietary.  Worst in this regard
(a purely personal opinion) is HP, which seems to think nobody but HP needs to
know how they do things and that physicians are too dumb to understand anyway.
Another way hospitals get computer analysis of ECG's is through "Telenet", which
offers telephone connection to a time sharing system (I think located in the
Chicago area).  Signals are digitized and sent via a modem through standard
phone lines.  ECG's are analyzed and printed information is sent back.
Turn-around time is a few minutes.  They offer an advantage to small hospitals
by offering verification of the analysis by a Cardiologist (for an extra fee).
I understand this service has had some financial problems (rumors).

Following is a bibliography gathered for a lecture to medical students about
computer analysis of ECG's.  Because of this it is mainly from more or less
clinical literature and is oriented toward methods of validation (This is
tough, because reading of ECG's by cardiologists, like many clinical
decisions, is partly a subjective process.  The major impact of these systems
so far has been to force the medical community to develop objective criteria
for their analysis.)

                                 BIBLIOGRAPHY
                  Computer Analysis of the Electrocardiogram
                               August 29, 1983

BOOK

Pordy L (1977) Computer electrocardiography:  present status and criteria.
Mt. Kisco, New York, Futura

PAPERS

Bonner RE, Crevasse L, Ferrer MI, Greenfield JC Jr (1972) A new computer
program for analysis of scalar electrocardiograms.  Computers and Biomedical
Research 5:629-653

Garcia R, Breneman GM, Goldstein S (1981) Electrogram computer analysis.
Practical value of the IBM Bonner-2 (V2MO) program.  J. Electrocardiology
14:283-288

Rautaharju PM, Ariet M, Pryor TA, et al. (1978)  Task Force III:  Computers in
diagnostic electrocardiography.  Proceedings of the Tenth Bethesda Conference,
Optimal Electrocardiography.  Am. J. Cardiol. 41:158-170

Bailey JJ et al (1974) A method for evaluating computer programs for
electrocardiographic interpretation

I.  Application to the experimental IBM program of 1971.  Circulation 50:73-79

II.  Application to version D of the PHS program and the Mayo Clinic program
of 1968.  Circulation 50:80-87

III.  Reproducibility testing and the sources of program errors.  Circulation
50:88-93

Endou K, Miyahara H, Sato (1980) Clinical usefulness of computer diagnosis in
automated electrocardiography.  Cardiology 66:174-189

Bertrand CA et al (1980) Computer interpretation of electrocardiogram using
portable bedside unit.  New York State Journal of Medicine.  August
1980(?volume):1385-1389

Jack Buchanan
Cardiology and Biomedical Engineering
University of North Carolina at Chapel Hill
(919) 966-5201

decvax!mcnc!ecsvax!jwb

------------------------------

Date: Friday, 24-Feb-84 18:35:44-GMT
From: JOLY G C QMA (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: re: Parallel processing in the brain.

To compare the product of millions of years of evolution
(ie the human brain) with the recent invention of parallel
processors seems to me to be like trying to effect an analysis
of the relative properties of chalk and cheese.
Gordon Joly.

------------------------------

Date: Wed, 29 Feb 84 13:17:04 PST
From: Dr. Jacques Vidal <vidal@UCLA-CS>
Subject: Brains: Serial or Parallel?


Is the brain parallel?  Or is the issue a red herring?

Computing and thinking are physical processes, and since all physical
processes unfold in time they are ultimately SEQUENTIAL, even
"continuous" ones, although the latter are self-timed (free-running,
asynchronous) rather than clocked.

PARALLEL means that there are multiple tracks with similar functions,
like the availability of multiple processors or multiple lanes on a
superhighway.  It is a structural characteristic.

CONCURRENT means simultaneous.  It is a temporal characteristic.

REDUNDANT means that there is structure beyond that which is minimally
needed for function, perhaps to insure integrity of function under
perturbations.

In this context, PARALLELISM, i.e. the deployment of multiple
processors, is the currency with which a system designer may purchase
these two commodities: CONCURRENCY and REDUNDANCY (a necessary but not
sufficient condition).

Turing machines have zero concurrency.  Almost everything else that
computes exhibits some.  Conventional processor architectures and
memories are typically concurrent at the word level.  Microprograms
are sequences of concurrent gate events.

There exist systems that are completely concurrent and free-running.
Analog computers and combinational logic circuits have these
properties.  There, computation progresses by chunks between initial
and final states.  A new chunk starts when the system is set to a new
initial state.

Non-von architectures have moved away from single-track computing and
from the linear organization of memory cells.  With cellular machines
another property appears: ADJACENCY.  Neighboring processors use
adjacency as a form of addressing.

These concepts are applicable to natural automata: brains certainly
employ myriads of processors and thus exhibit massive parallelism.
From the numerous processes that are simultaneously active (autonomous
as well as deliberate ones) it is clear that brains utilize
unprecedented concurrency.  These processors are free-running.
Control and data flows are achieved through three-dimensional
networks.  Adjacency is a key feature in most of the brain processes
that have been identified.  Long-distance communication is provided
for by millions of parallel pathways, carrying highly redundant
messages.

Now introspection indicates that conscious thinking is limited to one
stream of thought at any given time.  That is a limitation of the
mechanisms supporting consciousness, and some will claim that it can
be overcome.  Yet even a single stream of thinking is certainly
supported by many concurrent processes, obvious when thoughts are
spoken, accompanied by gestures, etc.
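The structural/temporal distinction drawn above can be demonstrated on a single track (example mine, not from the message): two generator "processes" advanced round-robin by one sequential scheduler are concurrent in effect, yet nothing runs in parallel.

```python
def counter(name, n):
    """A cooperative 'process': yields control after each step."""
    for i in range(n):
        yield f"{name}:{i}"

def run_concurrently(tasks):
    """One track (no parallelism), many interleaved tasks (concurrency)."""
    trace = []
    while tasks:
        task = tasks.pop(0)           # round-robin: take the next task
        try:
            trace.append(next(task))  # advance it one step
            tasks.append(task)        # and requeue it
        except StopIteration:
            pass                      # task finished; drop it
    return trace

print(run_concurrently([counter("A", 2), counter("B", 2)]))
# steps of A and B interleave, though only one runs at any instant
```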

Comments?

------------------------------

Date: 18 Feb 1984 2051-PST
From: Rob-Kling <Kling%UCI-20B%UCI-750a@csnet2>
Subject: Computing Worlds

          [Forwarded from Human-Nets Digest by Laws@SRI-AI.]

Sherry Turkle is coming out with a book that may deal in part with the
cultures of computing worlds. It also examines questions about how
children come to see computer applications as alive, animate, etc.

It was to be called "The Intimate Machine," but the title was
appropriated by Neil Frude, who published a rather superficial book
with an outline very similar to the one Turkle proposed to some
publishers.  Frude's book is published by New American Library.

Sherry Turkle's book promises to be much deeper and more careful.
It is to be published by Simon and Schuster under a different
title.

Turkle published an interesting article called "Computer as Rorschach"
in Society 17(2) (Jan/Feb 1980).

This article examines the variety of meanings that people
attribute to computers and their applications.

I agree with Greg that computing activities are embedded within rich
social worlds. These vary. There are hacker worlds which differ
considerably from the worlds of business systems analysts who develop
financial applications in COBOL on IBM 4341's.  AI worlds differ from
the personal computing worlds, etc.  To date, no one appears to
have developed a good anthropological account of the organizing
themes, ceremonies, beliefs, meeting grounds, etc.  of these various
computing worlds.  I am beginning such a project at UC-Irvine.

Sherry Turkle's book will be the best contribution (that I know of) in
the near future.

One of my colleagues at UC-Irvine, Kathleen Gregory, has just
completed a PhD thesis in which she has studied the work cultures
within a major computer firm.  She plans to transform her thesis into
a book.  Her research is sensitive to the kinds of language
categories Greg mentioned.  (She will be joining the Department of
Information and Computer Science at UC-Irvine in the Spring.)

Also, Les Gasser and Walt Scacchi wrote a paper on personal computing
worlds when they were PhD students at UCI.  It is available for $4
from:

        Public Policy Research Organization
        University of California,  Irvine
        Irvine, CA 92717

(They are now in Computer Science at USC and may provide copies upon
request.)


Several years ago I published two articles which examine some of the
larger structural arrangements in computing worlds:

        "The Social Dynamics of Technical Innovation in the
Computing World," Symbolic Interaction, 1(1) (Fall 1977):132-146.


        "Patterns of Segmentation and Intersection in the
Computing World," Symbolic Interaction, 1(2) (Spring 1978):24-43.

One section of a more recent article,
        "Value Conflicts in the Deployment of Computing Applications,"
Telecommunications Policy (March 1983):12-34,
examines the way in which certain computer-based technologies
such as automated offices, artificial intelligence,
CAI, etc. are the foci of social movements.


None of my papers examine the kinds of special languages
which Greg mentions. Sherry Turkle's book may.
Kathleen Gregory's thesis does, in the special setting of
one major computing vendor's software culture.

I'll send copies of my articles on request if I receive mailing
addresses.


Rob Kling
University of California, Irvine

------------------------------

End of AIList Digest
********************

∂29-Feb-84  1645	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #23
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Feb 84  16:44:59 PST
Date: Wed 29 Feb 1984 14:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #23
To: AIList@SRI-AI


AIList Digest            Thursday, 1 Mar 1984      Volume 2 : Issue 23

Today's Topics:
  Seminars - VLSI Knowledge Representation
    & Machine Learning
    & Computer as Musical Scratchpad
    & Programming Language for Group Theory
    & Algorithm Animation
  Conference - Very Large Databases Call for Papers
----------------------------------------------------------------------

Date: Wed 22 Feb 84 16:36:20-PST
From: Joseph A. Goguen <GOGUEN@SRI-AI.ARPA>
Subject: Hierarchical Software Processor

                     [Forwarded by Laws@SRI-AI.]

                          An overview of HISP
                           by K. Futatsugi

               Special Lecture at SRI, 27 February 1984


    HISP (hierarchical software processor) is an experimental
language/system, which has been developed at ETL (Electrotechnical
Laboratory, Japan) by the author's group, for hierarchical software
development based on algebraic specification techniques.
    In HISP, software development is simply modeled as the incremental
construction of a set of hierarchically structured clusters of
operators (modules).  Each module is constructed as a result of
applying one of the specific module building operations to the already
existing modules.  This basic feature makes it possible to write
inherently hierarchical and modularized software.
    This talk will introduce HISP informally by the use of simple
examples.  The present status of HISP implementation and future
possibilities will also be sketched.

------------------------------

Date: Thu 23 Feb 84 00:26:45-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: Very High Level Silicon Compilation

    [Forwarded by Laws@SRI-AI.  This talk was presented at the SRI
                    Computer Science Laboratory.]


           VERY HIGH LEVEL SILICON COMPILATION: THEORY AND PRACTICE

                               P.A.Subrahmanyam
                        Department of Computer Science
                              University of Utah

The  possibility  of  implementing  reasonably  complex special purpose systems
directly in silicon using VLSI technologies has served to  underline  the  need
for design methodologies that support the development of systems that have both
hardware  and  software  components.    It  is  important  in  the long run for
automated design aids that support such methodologies to be based on a  uniform
set  of  principles  --  ideally,  on  a  unifying  theoretical basis.  In this
context, I have been investigating a general framework to support the  analytic
and synthetic tasks of integrated system design. Two of the salient features of
this basis are:

   - The  formalism  allows  various levels of abstraction involved in the
     software/hardware design  process  to  be  modelled.    For  example,
     functional  (behavioral),  architectural  (system  and  chip  level),
     symbolic  layout,  and  electrical  (switch-level)--  are  explicitly
     modelled  as  being  typical  of the levels of abstraction that human
     "expert designers" work with.

   - The  formalism  allows  for  explicit  reasoning  about   behavioral,
     spatial, temporal and performance criteria.

The  talk  will  motivate  the  general  problem,  outline  the  conceptual and
theoretical basis, and discuss some of our preliminary  empirical  explorations
in building integrated software-hardware systems using these principles.

------------------------------

Date: 22 Feb 84 12:19:09 EST
From: Giovanni <Bresina@RUTGERS.ARPA>
Subject: Machine Learning Seminar

              [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

             *** MACHINE LEARNING SEMINAR AND PIZZA LUNCHEON ***


    Empirical Exploration of Problem Reformulation and Strategy Acquisition

Authors: N.S. Sridharan and J.L. Bresina
Location: Room 254, Hill Center, Busch Campus, Rutgers
Date: Wednesday, February 29, 1984
Time: Noon - 1:30 pm
Speaker: John L. Bresina

The  problem  solving  ability  of an AI program is critically dependent on the
nature of the symbolic  formulation  of  the  problem  given  to  the  program.
Improvement  in  performance  of  the  problem  solving  program can be made by
improving the strategy of controlling and directing search but more importantly
by shifting the problem formulation to a more appropriate form.

The choice of the initial formulation is critical, since  certain  formulations
are  more  amenable  to  incremental  reformulations than others.  With this in
mind,  an  Extensible  Problem  Reduction  method  is  developed  that   allows
incremental  strategy  construction.    The class of problems of interest to us
requires dealing with interacting subgoals.  A variety  of  reduction  operator
types   are   introduced  corresponding  to  different  ways  of  handling  the
interaction among subgoals.  These reduction  operators  define  a  generalized
And/Or  space including constraints on nodes with a correspondingly generalized
control structure for dealing with constraints and for combining  solutions  to
subgoals.    We  consider a modestly complex class of board puzzle problems and
demonstrate, by example, how reformulation of the problem can be carried out by
the construction and modification of reduction operators.

------------------------------

Date: 26 Feb 84 15:16:08 EST
From: BERMAN@RU-BLUE.ARPA
Subject: Seminar: The Computer as Musical Scratchpad

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

SEMINAR: THE COMPUTER AS MUSICAL SCRATCHPAD

Speaker: David Rothenburg, Inductive Inference, Inc.
Date:   Monday, March 5, 1984
Place:  CUNY Graduate Center, 33 West 42nd St., NYC
Room:   732
Time:   6:30 -- 7:30 p.m.

        The composer can use a description language wherein only those
properties and relations (of and between portions of the musical
pattern) which he judges significant need be specified.  Parameters of
these unspecified properties and relations are assigned at random.  It
is intended that this description of the music be refined in response
to iterated auditions.

------------------------------

Date: Sun 26 Feb 84 17:06:23-CST
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: A Programming Language for Group Theory (Dept. of Math)

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

            DEPARTMENT OF MATHEMATICS COLLOQUIUM
          A Programming Language for Group Theory
                        John Cannon
        University of Sydney and Rutgers University
                 Monday, February 27, 4pm

     The past 25 years have seen the emergence of a small but vigorous branch of
group theory which is concerned with the discovery and implementation of
algorithms for computing structural information about both finite and infinite
groups.  These techniques have now reached the stage where they are finding
increasing use both in group theory research and in its applications.  In order
to make these techniques more generally available, I have undertaken the
development of what in effect is an expert system for group theory.

     Major components of the system include a high-level user language (having
a Pascal-like syntax) and an extensive library of group theory algorithms.  The
system breaks new ground in that it permits efficient computation with a range
of different types of algebraic structures, sets, sequences, and mappings.
Although the system has only recently been released, already it has been
applied to problems in topology, algebraic number theory, geometry, graph
theory, mathematical crystallography, solid state physics, numerical analysis
and computational complexity as well as to problems in group theory itself.

------------------------------

Date: 27 Feb 1984 2025-PST (Monday)
From: Forest Baskett <decwrl!baskett@Shasta>
Subject: EE380 - Wednesday, Feb. 29 - Sedgewick on Algorithm Animation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

EE380 - Computer Systems Seminar
Wednesday, February 29, 4:15 pm
Terman Auditorium

                        Algorithm Animation
                          Robert Sedgewick
                          Brown University

    The central thesis of this talk is that it is possible to expose
fundamental characteristics of computer programs through the use of
dynamic (real-time) graphic displays, and that such algorithm animation
has the potential to be useful in several contexts.  Recent research in
support of this thesis will be described, including the development of
a conceptual framework for the process of animation, the implementation
of a software environment on high-performance graphics-based
workstations supporting this activity, and the use of the system as a
principal medium of communication in teaching and research.  In
particular, we have animated scores of numerical, sorting, searching,
string processing, geometric, and graph algorithms.  Several examples
will be described in detail.

[Editorial remark: This is great stuff.  - Forest]

------------------------------

Date: 23 Feb 84 16:32:24 PST (Thu)
From: Gerry Wilson <wilson@aids-unix>
Subject: Conference Call for Papers


                        CALL  FOR PAPERS
                        ================

               10'th International Conference on

                    Very Large Data Bases


The tenth VLDB conference is dedicated to the identification and
encouragement of research, development, and application of
advanced technologies for management of large data bases.  This
conference series provides an international forum for the promotion
of an understanding of current research; it facilitates the exchange
of experiences gained in the design, construction and use of data
bases; it encourages the discussion of ideas and future research
directions.  In this anniversary year, a special focus is the
reflection upon lessons learned over the past ten years and the
implications for future research and development.  Such lessons
provide the foundation for new work in the management of large
data bases, as well as the merging of data bases, artificial
intelligence, graphics, and software engineering technologies.

TOPICS:

Data Analysis and Design           Intelligent Interfaces
    Multiple Data Types                User Models
    Semantic Models                    Natural Language
    Dictionaries                       Knowledge Bases
                                       Graphics
Performance and Control
    Data Representation            Workstation Data Bases
    Optimization                       Personal Data Management
    Measurement                        Development Environments
    Recovery                           Expert System Applications
                                       Message Passing Designs
Security
    Protection                     Real Time Systems
    Semantic Integrity                 Process Control
    Concurrency                        Manufacturing
                                       Engineering Design
Huge Data Bases
    Data Banks                     Implementation
    Historical Logs                    Languages
                                       Operating Systems
                                       Multi-Technology Systems

Applications                       Distributed Data Bases
    Office Automation                  Distribution Management
    Financial Management               Heterogeneous and Homogeneous
    Crime Control                      Local Area Networks
    CAD/CAM

Hardware
    Data Base Machines
    Associative Memory
    Intelligent Peripherals


LOCATION:  Singapore
DATES:     August 29-31, 1984

TRAVEL SUPPORT: Funds will be available for partial support of most
                participants.

HOW TO SUBMIT:  Original full length (up to 5000 words) and short (up
  to 1000 words) papers are sought on topics such as those above.  Four
  copies of the submission should be sent to the US Program Chairman:

       Dr. Umeshwar Dayal
       Computer Corporation of America
       4 Cambridge Center
       Cambridge, Mass. 02142
       [Dayal@CCA-UNIX]

IMPORTANT DATES:    Papers Due:         March 15, 1984
                    Notification:       May 15, 1984
                    Camera Ready Copy:  June 20, 1984

For additional information contact the US Conference Chairman:

      Gerald A. Wilson
      Advanced Information & Decision Systems
      201 San Antonio Circle
      Suite 286
      Mountain View, California  94040
      [Wilson@AIDS]

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1159	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #24
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  11:59:24 PST
Date: Tue  6 Mar 1984 10:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #24
To: AIList@SRI-AI


AIList Digest            Tuesday, 6 Mar 1984       Volume 2 : Issue 24

Today's Topics:
  Conferences - AAAI-84 Paper Submission Deadline,
  AI Tools - LISP for IBM PC & UNIX VAX Tools,
  Manual Generators - Replies,
  Parser Generator - Request,
  Mathematics - Fermat's Last Theorem & Map Coloring,
  Personal Robotics - Reply,
  Waveform Analysis - ECG Systems & Validation,
  Review - U.S. Response to Japan's AI efforts
----------------------------------------------------------------------

Date: Wed 29 Feb 84 15:44:06-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: AAAI-84 Paper Submission Deadline


*******  AAAI-84 PAPER SUBMISSION DEADLINE IS APRIL 2, 1984  *******


The SIGART Newsletter (No. 87, January 1984) has mistakenly published
two conflicting dates for submission of papers to AAAI-84.  Please note
that papers must be received in the AAAI Office in Menlo Park, CA, on or
before April 2, 1984.  This is the date that appears in the AAAI-84 Call
for Papers (printed on page 17 of the above-mentioned Newsletter).  The
date printed in the "Calendar" section on page 1 of the Newsletter is
incorrect.

Thank you,

Ron Brachman, Program Chair
Claudia Mazzetti, AAAI Executive Director

------------------------------

Date: Sun 4 Mar 84 13:33:49-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: LISP for IBM PC

I asked the list a while back about implementations of LISPs for
IBM PC's. I got a pointer for IQLISP, but seem to have misplaced
the pertinent info on how to order it. Can anyone supply this?

If you have any other implementations, I'll be glad to pass any
reviews back to the list.

--ted

[The original message must have been prior to issue 53, and I
don't have it online.  Does someone have the address handy?  -- KIL]

------------------------------

Date: 27 Feb 84 16:26:42-PST (Mon)
From: ihnp4!houxm!hou2a!zev @ Ucb-Vax
Subject: AI (LISP,PROLOG,ETC.) for UNIX VAX
Article-I.D.: hou2a.269

A friend of mine is looking for a LISP, PROLOG, and/or
any other decent Artificial Intelligence system that
will run on a VAX under UNIX.

Please send replies directly to Mr. Leonard Brandwein at
aecom!brandw

He asked me to post this as a favor, since he does not
have direct access to the net.

In the likely case that you don't have a direct path
to aecom, here is one that will get you there from
any machine that can reach houxm:

houxm!hou2a!allegra!philabs!aecom!brandw

Of course, you can shorten the path if you can reach
any of the intermediate machines directly.

Thank you very much.

Zev Farkas  hou2a!zev  201 949 3821

[When sending to Usenet from the Arpanet, be sure to put double quotes
around all of the address prior to the @-sign.  Readers who want help
getting messages through the gateways should contact AIList-Request@SRI-AI.
Useful summaries or interesting replies may be published directly in
AIList, of course.  I will pass along some information about CProlog
in the next issue.  -- KIL]

------------------------------

Date: Thu, 1 Mar 84 5:12:55 EST
From: Stephen Wolff <steve@brl-bmd>
Subject: Re:  AI (LISP,PROLOG,ETC.) for UNIX VAX

     [Forwarded from the Info-Unix distribution by Laws@SRI-AI.]

Franz Lisp comes with Berkeley UNIX.  Interlisp is available.  Also T.
CProlog is available from Edinburgh.  You can get Rosie from RAND.
And these are just basics.  There's LOTS!  There are many schools out there
who are (possibly newly) in the AI business who couldn't afford DEC-20's
(obviously not SRI, UTexas, CMU, etc.), but who DID buy VAXen back when they
were good value for money.  And they're mostly running BSD, and they're
busily developing all the tools and software that AI folk do.  Is there any
PARTICULAR branch of AI you're interested in?  [...]

------------------------------

Date: Thu, 1 Mar 84 4:23:11 EST
From: Stephen Wolff <steve@brl-bmd>
Subject: Documentation tools

        Artificially intelligent it's not, and not even fancy; but there are
folks hereabouts that use the UNIX tools SCCS (or RCS) to do documentation
of various sorts.  Although intended for managing the writing, evolving and
maintaining of large software packages, they can't tell C from Fortran from
straight English text and they will quite cheerfully maintain for you the
update/revision tree in any case.

        I should imagine with a bit of thought you could link your code AND
documentation modules and manage 'em both simultaneously and equitably.

------------------------------

Date: Sat, 3 Mar 84 18:43:59 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Manual generators

The SCRIBE system (Brian K. Reid of CMU and Janet H. Walker of BBN)
may be close to what you are looking for.  It has automatic paragraph
numbering, automatic table-of-contents generation, automatic indexing,
and automatic bibliography.  (I use the word "automatic" somewhat
loosely.  The user has to be involved.)  A more sophisticated system,
I believe, is in use at the University of Michigan's Information
Systems Design and Optimization System (ISDOS) project.  The contact
is Prof. Dan Teichroew in the Industrial and Operations Engineering
department at Ann Arbor.  It may be available to ISDOS sponsors.

  --Charlie

------------------------------

Date: Thu 1 Mar 84 20:24:33-EST
From: Howard  Reubenstein <HBR@MIT-XX.ARPA>
Subject: Looking for a Parser Generator

          [Forwarded from the MIT-MC bboard by SASW@MIT-MC.]

        A friend of mine needs a parser generator which produces
output in either FORTRAN or LISP. Does anyone know where he can
get access to one?

------------------------------

Date: Thu, 1 Mar 84 08:34 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Re: Fermat's Last Theorem & Undecidable Propositions

Fermat's Last Theorem:

is the assertion that

                A↑N + B↑N = C↑N

has no solution in positive integers for N > 2.  (For N = 2, of course, all the
well-known right triangles like [3,4,5] are solutions.)

The Four-Color Theorem:

states that any planar map can be colored so that no two adjacent
regions are the same color using no more than four different colors.
(Regions must be connected; "adjacent" means having a common boundary of
finite length, i.e. not just touching at a point.)

The latter was shown to be true by two mathematicians at the University
of Illinois, using a combination of traditional mathematical reasoning
and computer-assisted analysis of a large set of graphs.  An article
describing the proof can be found in a back issue of /Scientific
American/.

The former appears in a manuscript by Fermat, with a marginal notation
to the effect that he had found a slick proof, but didn't have enough
space to write it down.  This was discovered after his death, of course.
Most mathematicians believe the theorem to be true, and most do not
think Fermat is likely to have found a valid proof, but neither
proposition has been proved beyond question.

Mark
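The equation as stated invites a quick empirical probe.  The Python sketch
below is purely illustrative (a bounded brute-force search proves nothing about
the theorem, which quantifies over all integers); the function name and the
search bounds are my own choices, not anything from the original message.

```python
# Illustrative brute-force search (not a proof): look for solutions to
# A^N + B^N = C^N in positive integers, for small N and a small bound.

def fermat_solutions(n, bound):
    """Return all (a, b, c) with a <= b <= bound, c <= bound, a^n + b^n = c^n."""
    nth_powers = {c ** n: c for c in range(1, bound + 1)}  # c^n -> c lookup
    hits = []
    for a in range(1, bound + 1):
        for b in range(a, bound + 1):
            s = a ** n + b ** n
            if s in nth_powers:
                hits.append((a, b, nth_powers[s]))
    return hits

# For N = 2 the familiar Pythagorean triples appear ...
print(fermat_solutions(2, 20))   # includes (3, 4, 5) and (5, 12, 13)
# ... but for N = 3 the search comes up empty, as Fermat's claim predicts.
print(fermat_solutions(3, 200))
```

The empty result for N = 3 is expected: that case was in fact settled by Euler
long before computers, so only the larger exponents remained open in 1984.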

------------------------------

Date: 28 Feb 84 20:42:40-PST (Tue)
From: decvax!genrad!wjh12!n44a!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Re: Fermat's Last Theorem & Undecida - (nf)
Article-I.D.: inmet.945

Fermat's Last Theorem states that the equation

    n    n    n
   A  + B  = C

has no solution in positive integers A, B, C when n > 2.

The "four-color map problem" states that any map (think of, say, a map of the
US) requires at most four colors to color all regions without using the same
color for any two adjacent ones.  (This is for planar maps.  A sphere is
equivalent to the plane, but maps on a torus may require up to seven colors.)

The former has neither been proven nor disproven.  The latter was "proven"
with the aid of a computer program; many feel that this does not constitute
a true proof (see all the flames elsewhere in this group).  Incidentally,
the school where it was "proven" changed their postage meters to print
"FOUR COLORS SUFFICE" on outgoing mail.

------------------------------

Date: Thu 1 Mar 84 13:53:05-PST
From: Sam Hahn <SHahn@SUMEX-AIM.ARPA>
Subject: Domestic Robotics


I find that Robotics Age (the journal of intelligent machines), published by
Robotics Age, Inc, located at:

                Strand Building
                174 Concord Street
                Peterborough, NH  03458         (603) 924-7136

is a good source of information on low-end, more personal, and thus more
"domestically" oriented robotics.  For example, the advertisers include

        Micromation:    voice command system for Hero-1
        Iowa Precision Robotics:
                        68000-controlled educ/pers'l robot
        Micron Techn.:  computer vision for your PC
        S.M. Robotics:  PR kit for $59.95

just to name a few from the February 1984 issue.

Their articles are also more PR-oriented, and often include some level of
design info.

I'm new to the publication myself (about 1/2 year), but find it a source of
information not elsewhere available.

                                        -- sam hahn

------------------------------

Date: 27 Feb 84 19:25:34-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: computer ECG - (nf)
Article-I.D.: uiucdcs.5890

Ivan Bratko, of the Josef Stefan Institute in Ljubljana, Yugoslavia, has
recently achieved some remarkable results. With the aid of computer
simulation he has built an expert system capable of diagnosing multiple
simultaneous causes of heart malfunction from ECG outputs. This was a
significant contribution to medical science, since for the class of failures
he treated, there was no known method of diagnosing anything more complicated
than a single cause.

His work will be printed as a monograph from the newly-formed "International
School for the Synthesis of Expert Knowledge" (ISSEK), which will have its
first meeting this summer. ISSEK is an affiliation of computer science labs
dedicated to the automatic generation of new knowledge of super-human quality.
(Membership of ISSEK is by invitation only).

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign
                                        { pur-ee | ihnp4 } ! uiucdcs ! marcel

------------------------------

Date: 2 Mar 84 20:54:42 EST
From: Ron <FISCHER@RUTGERS.ARPA>
Subject: Re: computer ECG, FDA testing of AI programs

    Apparently because of fierce competition, much current information,
    particularly with regard to algorithms, is proprietary.  Worst in this
    regard (a purely personal opinion) is HP who seems to think nobody but
    HP needs to know how they do things and physicians are too dumb to
    understand anyway.
    ...
    They offer an advantage to small hospitals by offering verification of
    the analysis by a Cardiologist (for an extra fee).

What the latter seems to say is that the responsibility for accepting
the diagnosis is that of the local cardiologist.  I cannot see a
responsible doctor examining a few runs of a program's output and
proclaiming it "correct."

A hedge against complaints of computers taking over decision-making
processes from humans has been that we can look at the algorithms
ourselves or examine the reasons that a system concluded something.

If this information becomes proprietary the government will probably
license software for medical purposes the way the FDA does for new
drugs.

Imagine a testing procedure for medical diagnostic AI programs that is
as expensive and complicated as that for testing new drugs.

(ron)

[Ron makes a good point.  As a side issue, though, I would like
to mention that H-P has not been entirely secretive about its
techniques.  On March 8, Jim Lindauer of H-P will present a seminar
at Stanford (MJH 352, 2:45PM) on "Uses of Decision Trees in ECG
Analysis".  -- KIL]

------------------------------

Date: 29 Feb 84 15:36:33 PST (Wednesday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: U.S. Response to Japan's AI efforts

In the new "soft" computer journal from Springer-Verlag, 'Abacus', Vol.
1, #2, Winter 1984, is an essay by Eric A. Weiss reviewing Feigenbaum
and McCorduck's 'Fifth Generation' book and general AI books.  The
general A.I. review is worth reading.  The whole piece is lengthy, but I
quote only from the final section.

--Rodney Hoffman

[This is a rather remarkable book review.  In addition to discussing the
"The Fifth Generation" and several AI reference works and textbooks,
Eric Weiss describes the history and current partitioning of AI, the
disputes and alignments of the major AI centers, and the solution to
our technological race with foreign powers.  It's well worth reading.

This second issue of Abacus also has interesting articles on Ada,
the language and the countess, tomographic and NMR imaging (with
equations!), and the U.S. vs. IBM antitrust suit, as well as columns
on computers and the law and other topics.  The magazine resembles
a Scientific American for the computer-oriented, and the NMR article
is of quality comparable to IEEE Computer.  -- KIL]

        ------------------------------------------------------
U.S. Response

On the basis of all this perspective, let me return to the Fifth
Generation Project itself and suggest that the U.S. response should be
thoughtful, considered, not guided by panic or fear, but based on
principles this nation has found fruitful:
        build on experience
        do what you do best
        encourage enthusiasm
What has been our experience with foreign science and technology?  We
know that new scientific knowledge gives the greatest benefit to those
nations which are most ready to exploit and use it, and this ready group
may not include the originating nation.... [discussion of rocketry,
automobiles, shipbuilding, steel, consumer electronics]

From this experience, the U.S. should look forward to reaping the
benefits from whatever the Japanese Fifth Generation Project develops,
and, just because we are bigger, richer, and stronger, benefiting more
from these improvements than the originating nation....

... "Do what you do best."  We do not compete with the Japanese very
well, but we do best in helping them.... [The U.S.] is best at helping
others, especially Japan, and at giving money away.... Thus, the
indicated course for the U.S. ... is to help the Japanese Fifth
Generation Project in every way we can: by supplying grants of money; by
loaning college professors; by buying and copying its product,
exploiting its scientific and technological developments and
breakthroughs as fast as they appear; and by ignoring or clucking
sympathetically over any failures or missed schedules.  Finally,...
encourage enthusiasm.

Young military people may murmur against this stance on the grounds that
military developments must be home-grown and that the development of
technology which might be used in weapons should be guided by the
military.  This assertion is borne out neither by history nor by the
present public attitude of the DoD.... [discussion of WWII anti-aircraft
guns, mines, torpedoes, and many other such]

... The advantages of letting another nation develop your military
hardware are frequently and forcefully explained to other countries by
the DoD and its industrial toadies, but these logical arguments... are
never put in their equally logical vice-versa form....

The danger is not that the Japanese will succeed -- for their successes
will result in U.S. benefits -- but that somehow we will not make prompt
use of whatever they accomplish.  We might manage this neglect if we
overdo our national inclination to  fight them and compete with them....

A related but more serious danger lies in the possibility that our
military people will get their thumbs into the American AI efforts and
make secret whatever they don't gum up.... Even the best ideas can be
killed, hurt, or at least delayed if hedged around with bureaucrats and
secrecy limitations.

... We should press vigorously forward on all fronts in the unplanned
and uncoordinated fashion that we all understand.  We should let a
thousand flowers bloom.  We should encourage everyone.... We should hand
out money.  We should transport experts.  We should jump up and down.
We should be ready to grab anybody's invention, even our own, and use
it.  We should be ready to seize winners and dump losers, even our own.
We should look big, fearless, happy, and greedy, and not tiny,
frightened, worried, and dumb.

... The conclusion is: don't bet on the Japanese, don't bet against
them, don't fear them.  Push forward with confidence that the U.S. will
muddle through -- if it can keep its government from making magnificent
plans for everyone.

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1305	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #25
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  13:01:19 PST
Date: Tue  6 Mar 1984 11:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #25
To: AIList@SRI-AI


AIList Digest            Tuesday, 6 Mar 1984       Volume 2 : Issue 25

Today's Topics:
  Review - Laws of Form,
  Brain Theory - Parallelism,
  AI Reports - Stanford Acquisitions,
  Administrivia - New Location for List-of-Lists,
  AI Software - Portability of C-Prolog
----------------------------------------------------------------------

Date: Sat, 3 Mar 84 18:36:25 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Laws of Form

I don't pretend to be an expert on LoF but I think there are at least two
interesting aspects to it.  One is that it provides a calculus that can be used
to "compile" a set of syllogisms (page 124 of the Dutton 1979 edition).  A
second is that it does away with Russell and Whitehead's cumbersome Theory
of Types.  All orders of self-referential sets of statements can be evaluated
within the set of "imaginary" values.

You can argue that the compilation of syllogism sets (rule sets) can already
be done using truth tables.  I think that the benefit of Spencer-Brown's
calculus is that it is much more efficient and should run much faster.

Those who are really interested should loosen up and plow through the book
a few times with an open mind.  It is really very thought-provoking.

  --Charlie
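For comparison, the truth-table route mentioned above can be sketched in a few
lines of Python.  The rule set and variable names here are illustrative
inventions, not Spencer-Brown's notation; the point of the sketch is the cost,
which is exponential in the number of variables, exactly the inefficiency a
better calculus would avoid.

```python
# Truth-table "compilation" of a small syllogism set: enumerate every
# assignment and check that the premises force the conclusion.
from itertools import product

def implies(p, q):
    return (not p) or q

def entails(premises, conclusion, names):
    """True iff every assignment satisfying all premises satisfies conclusion."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False     # counterexample found
    return True

# Premises: P -> Q and Q -> R.  Conclusion: P -> R (hypothetical syllogism).
premises = [lambda e: implies(e["P"], e["Q"]),
            lambda e: implies(e["Q"], e["R"])]
conclusion = lambda e: implies(e["P"], e["R"])
print(entails(premises, conclusion, ["P", "Q", "R"]))  # True
```

Each query costs 2^n evaluations, so anything that "compiles" the rule set
into a faster form is a real win for large rule bases.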

------------------------------

Date: Mon 5 Mar 84 20:34:27-EST
From: David Rogers <DRogers%MIT-OZ@MIT-MC.ARPA>
Subject: parallel minds?

For a very good (if 3 years old) discussion of parallelism
in the brain, refer to Hinton and Anderson's book "Parallel
Models of Associative Memory," pages 32-44 [Hin 81]. The
applicable section is entitled "Parallelism and Distribution
in the Mammalian Nervous System". Structurally, parallelism is
inherent throughout the nervous system, making simple
sequential models of human low-level cognition highly
suspect.

Though it was not openly stated in the discussion on this list,
there seem to be two issues of parallelism involved here:
low-level parallelism, and parallelism at some higher
"intellectual" level. The latter subject is rightly the domain
of experimentalists, and should not be approached with such
simple AI techniques as introspection ("Well, I *feel*
sequential when I think...").

One known experimental fact does suggest a high degree of
parallelism, even in higher cognitive functions. Since
the firing time of a neuron is on the order of
2-3 milliseconds, and some highly complex tasks (such as
face recognition) are performed in about 300 ms, it seems
clear that the brain uses massive parallelism, not just
in the visual system but throughout [Fel 79].
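The arithmetic behind this observation (often called the "hundred-step"
argument) is simple enough to spell out; the timing figures below are the ones
quoted in the message, not new measurements.

```python
# The "hundred-step" argument: if one neuron firing takes ~3 ms and face
# recognition completes in ~300 ms, a strictly sequential account allows
# only about 100 elementary steps end to end.
neuron_step_ms = 3        # approximate time for one neuron firing
task_ms = 300             # approximate time for face recognition
max_sequential_steps = task_ms // neuron_step_ms
print(max_sequential_steps)  # 100
```

Any strictly serial model of the task therefore has a budget of roughly 100
primitive operations, far fewer than any known serial recognition algorithm
uses, which is what makes massive parallelism the natural conclusion.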

I would suggest that future discussions offer the reader
a few more experimental details, lest the experimental
psychologists in our midst feel unappreciated.

                              ---------
[Hin 81]
   "Parallel Models of Associative Memory," G. Hinton,
   J. Anderson, eds., Lawrence Erlbaum Assoc., 1981, pages 32-44.

[Fel 79]
   "A Distributed Information Processing Model of Visual
   Memory", J.A. Feldman, University of Rochester Computer
   Science Department, TR52, December 1979.

------------------------------

Date: Sun 4 Mar 84 21:56:21-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Latest Math & CS Library "New Reports List" posted on-line.

[Every month or two Stanford announces its new CS report acquisitions.
I culled and sorted many of the citations for an earlier set of AIList
issues, but I have not gotten around to doing so for the last six
months or so.  Instead, I am forwarding this notice as an example
of the notices you can get by contacting LIBRARY@SU-SCORE.  For those
interested in FTPing the report listings, I would characterize them
as being lengthy, somewhat cryptic and inconveniently formatted, and
usually divided about equally between AI-related topics and non-AI
math/CS topics (VLSI design, hardware concepts, operating systems,
networking, office automation, etc.).  -- KIL]

The latest Math & Computer Science Library "New Reports List" has been
posted on-line.  The file is "<LIBRARY>NEWTRS" at SCORE, "NEWTRS[LIB,DOC]"
at SAIL, "<CSD-REPORTS>NEWTRS" at SUMEX, and "<LIBRARY>NEWTRS" at SIERRA.
In case you miss a reports list, the old lists are being copied to
"<LIBRARY>OLDTRS" at SCORE and "<LIBRARY>OLDTRS" at SIERRA where they will
be saved for about six months.

If you want to see any of the reports listed in the "New Reports List,"
either come by the library during the display period mentioned or send a
message to LIBRARY at SCORE, giving your departmental address and the
six-digit accession numbers of the reports you want to see, and we will
check them out in your name and send them to you as soon as they are available.

The library receives technical reports from over a hundred universities
and other institutions.  The current batch includes - among others -
reports from:


      Eidgenoessische Technische Hochschule Zuerich. Institut fuer Informatik.
      IBM. Research Division.
      Institut National de Recherche en Informatique et en Automatique (INRIA).
      New York University. Courant Institute of Mathematical Sciences.
      U.K. National Physical Laboratory. Division of Information Technology
        and Computing.
      Universite de Montreal. Departement d'Informatique et de Recherche
        Operationnelle.
      University of Edinburgh. Department of Computer Science.
      University of Southern California. Information Sciences Institute.
      University of Wisconsin-Madison. Computer Sciences Department.




                                        - Richard Manuck
                                          Math & Computer Science Library
                                          Building 380 - 4th Floor
                                          LIBRARY at SCORE

------------------------------

Date: 1 Mar 1984 2142-PST
From: Zellich@OFFICE-3 (Rich Zellich)
Subject: New location for list-of-lists (Interest-Groups.TXT)

File Interest-Groups.TXT has been moved from OFFICE-3 and is now
available on the SRI-NIC host in file <NETINFO>INTEREST-GROUPS.TXT

Requests for copies of the list, updates to the list, etc., should be
sent to ZELLICH@SRI-NIC in the future, instead of ZELLICH@OFFICE-3 or
RICH.GVT@OFFICE-3.

Cheers,
Rich

------------------------------

Date: Wednesday, 22-Feb-84 23:45:00-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe%EDXA@UCL-CS>
Subject: Portability of C-Prolog

[The following is forwarded from the Prolog digest.  I consider
it an interesting account of the difficulties of making AI
software available on different systems.  The message is
8K characters, so I have put it last in the digest for those
who want to skip over it. -- KIL]

There was a question in this Digest about whether C-Prolog had
been ported to Apollos.  I don't know about that, but I have had a
great deal to do with C-Prolog, so I can say what might give trouble
and what shouldn't.

The first thing to beware of is that there are two main versions
of C-Prolog drifting around.  The one most people have is the one
distributed by EdCAAD (which is where Fernando Pereira wrote it), and
while that runs under VAX/UNIX and VAX/VMS both, and is said to run
on at least one 68000 box, V7 C compilers don't like it much.  The
other version is distributed by EdAI on a very informal basis, but it
should be available from Silogic in a couple of weeks.  The EdAI
version has been ported to the Perq (running ICL's C-machine micro-
code and their PaNiX port of V7 UNIX) and to another C-machine called
the Orion (that compiler isn't a derivative of PCC).  C-Prolog has
something like one cast per line; the EdAI version has stronger type
declarations so that the compiler produces no warning messages.  Both
versions are essentially the same, so EdAI cannot distribute their
version to anyone who hasn't got a licence for the EdCAAD version.

What C-Prolog v1.4d.edai requires is

[1] a V7 or later C compiler
[2] pointers should be 32 bits long
[3] the compiler should support 32 bit integer arithmetic, and
    floats should be storable in 32 bits.  (In fact if anyone has
    a decent C compiler for the Dec-10 [a] please can we have a copy
    and [b] C-Prolog should run quite happily on it.)
[4] It needs to steal 3 bits out of floats, so it needs to know a bit
    about the floating-point storage format.  IEEE and VAX-11 are ok.
[5] I/O uses <stdio> exclusively.
    C-Prolog supports ~username/X and $envvar/X expansion, but if the
    "unix" identifier is not defined it knows not to ask.
[6] brk() and sbrk() are needed.  If you haven't got them, you could
    declare a huge array and use that, but that would require source
    hacking.
[7] The MAJOR portability problem is that C-Prolog assumes that all
    pointers into the area managed by brk() and sbrk() look like
    POSITIVE integers.  It doesn't matter if the stack or text areas
    lie in negative address space (in fact the stack IS in negative
    address space on the Perq and Orion).  Getting around this would
    be a major exercise, not to be undertaken by anyone without a
    thorough understanding of the way C-Prolog works.  Since we have
    a GEC series 63 machine, and since there is some political
    pressure to adopt this as a UK IKBS machine (to which application
    it is NOT suited, nor any other), and since that machine puts
    everything in negative address space, we may produce a version of
    C-Prolog which can handle this.  But don't hold your breath.

The Perq (running C) and the Orion are both word-addressed.  This is
no problem.  Getting C-Prolog running on the Orion was a matter of
telling it where to look for its files and saying "make", but then
the Orion, though nothing like a VAX, runs 4.1bsd.  Getting it going
on a Perq was harder, but the bugs were in the Perq software, not in
C-Prolog.  The main thing anyone porting C-Prolog to a new machine
with a decent C and positive address space should have to worry about
is the sizes of the data areas, in the file parms.c.

To give this message some interest for people who couldn't care
less about porting C-Prolog, here are some general notes on porting
Prolog interpreters written in C.  (I've seen seven of them, but not
UNH Prolog.)

A well written Prolog interpreter uses the stdio library, so that
I/O shouldn't be too much of a problem.  But it may also want to
rename and/or delete files, to change the working directory, or to
call the command interpreter.  These operations should be in one file
and clearly labelled as being operating-system dependent.  Porting
from one version of UNIX to another should cause no difficulty, but
there is a problem with calling the shell: people using ?.?bsd will
expect the C-shell, and an interpreter written for V7 may not know
about that.  If you change it, be sure to use the environment
variable SHELL to determine what shell to use.  (Ports to S3 should
do this too, so that users who are supposed to be restricted to rsh
can't escape to sh via prolog.)

No Prolog implementor worth his salt would dream of using malloc.
As a result, a Prolog interpreter is pretty well bound to use brk()
and/or sbrk().  It may do so only at start-up (C-Prolog does this),
or it may do so dynamically (a Prolog with a garbage collector, and
pitifully few of them have one, will probably do this).  In either case
allocation is virtually certain to be word-aligned and in units of
words, where a word is a machine pointer.

There are two ways of telling what sort of thing a pointer is
pointing to.  One way is to use TAGS, that is to reserve part of the
word to hold a code saying (integer, pointer to atom, pointer to
clause, pointer to variable, &c).  This is particularly tempting on machines
like the M68000 where part of an address is spare anyway.  The other
way is to divide the address space into a number of partitions, such
as (integers, atoms, clauses, global stack, local stack, trail), and
to tell what something points to by checking *where* it points.
C-Prolog could be described as "semi-tagged": integers, floats,
pointers to clauses, and pointers to records all live in the virtual
partition [-2↑31,0) and are tagged, pointers to real objects are
discriminated by where they point.  Other things being equal, tagged
systems are likely to be slower.  But tagged systems should be immune
to the "positive address space problem".  So you have to check which
sort your system is.  If it is tagged, you should check the macros
for converting between tagged form and machine addresses VERY VERY
carefully.  They may not work on your machine, and it may be possible
to do better.  Here is an example of what can go wrong.

/* Macro to convert a 24-bit byte pointer and a 6-bit tag to a
   32-bit tagged pointer
*/
#define Cons(tag,ptr) (((int)(ptr)<<8) | tag)
/* Macro to extract the tag of a tagged pointer */
#define Tag(tptr) ((tptr)&255)
/* Macro to convert a tagged pointer to a machine pointer */
#define Ptr(tptr) (pointertype)(tptr>>8)
/* Macro to find the number of words between two tagged
   pointers
*/
#define Delta(tp1,tp2) (((tp1)-(tp2))>>10)

What can go wrong with this?  Well, Delta can go wrong if the machine
uses word addresses rather than byte addresses, in which case it
should be >>8 instead of >>10 (byte pointers need two extra bits of
shift to count the difference in words).  Cons can go wrong
if the top bits of a pointer are significant.  (On the Orion the top
2 bits and the bottom 24 bits are significant.)  Ptr can go wrong
if addresses are positive and user addresses can go over 2↑23, in
which case an arithmetic right shift may do horrid things.  I have
seen at least two tagged Prolog interpreters which would go wrong on
the Orion.

Prolog interpreters tend to be written by people who do not know
all the obscure tricks they can get up to in C, so at least you ought
not to be plagued by the "dereferencing 0" problem.

If anyone reading this Digest has problems porting C-Prolog other
than the positive address space problem, please tell me.  I may be
able to help.  There is one machine with a C compiler that someone
tried to port it to, and failed, and that is a Z8000 box where the
user's address space is divided up into a lot of 64kbyte chunks, and
the chunks aren't contiguous!  A tagged system could handle that,
though with some pain.  C-Prolog can't handle it at all.

If anyone has already ported some version of C-Prolog to another
machine (not a VAX, Perq/UNIX, Orion, or M68000/UNIX) please let me
know so that we can maintain a list of C-Prolog versions, saying
what machine, what problems, and whether your version is available to
people holding an EdCAAD licence.

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1615	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #26
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  16:15:00 PST
Date: Tue  6 Mar 1984 15:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #26
To: AIList@SRI-AI


AIList Digest           Wednesday, 7 Mar 1984      Volume 2 : Issue 26

Today's Topics:
  Seminars - Extended Prolog Theorem Prover &
    A Model of LISP Computation &
    YOKO Random Haiku Generator &
    Emulation of Human Learning &
    Circuit Design by Knowledge-Directed Search &
    Knowledge Structures for Automatic Programming &
    Mathematical Ontology &
    Problem Solving in Organizations &
    Inequalities for Probabilistic Knowledge
  Conference - STeP-84 Call for Papers
----------------------------------------------------------------------

Date: 29 Feb 84 13:54:56 PST (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 3/8/84

[Forwarded from the SRI-AI bboard by Laws@SRI-AI.]


                Mark E. Stickel
                SRI International

                A Prolog Technology Theorem Prover

An extension of Prolog, based on the model elimination theorem-proving
procedure, would permit production of a Prolog technology theorem prover
(PTTP). This would be a complete theorem prover for the full first-order
predicate calculus, not just Horn clauses, and provide capabilities for
full handling of logical negation and indefinite answers. It would be
capable of performing inference operations at a rate approaching that of
Prolog itself--substantially faster than conventional theorem-proving
systems.

PTTP differs from Prolog in its use of unification with the "occurs
check" for soundness, the complete model elimination input inference
procedure, and a complete staged depth-first search strategy. The use of
an input inference procedure and depth-first search minimizes the
differences between this theorem-proving method and Prolog and permits
the use of highly efficient Prolog implementation techniques.

        Thursday, March 8, 1984 4:00 pm
        Hewlett Packard
        Stanford Division
        5M Conference room

        1501 Page Mill Rd
        Palo Alto

        *** Be sure to arrive at the building's lobby on time, so that you may
be escorted to the meeting room.

------------------------------

Date: Wed 29 Feb 84 13:07:26-PST
From: MESEGUER@SRI-AI.ARPA
Subject: A Model of LISP Computation

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]


                REWRITE RULE SEMINAR AT SRI-CSL
                    Wednesday March 7, 3:00 pm

                   A Model of Computation
         Theory and application to LISP-like systems

                      Carolyn Talcott
                    Stanford University

The goal of this work is to provide a rich context in which a
variety of aspects of computation can be treated and where new
ideas about computing can be tested and developed.  An important
motivation and guide has been the desire to understand the construction
and use of LISP-like computation systems.

The first step was to define a model of computation and develop the
theory to provide basic tools for further work.  The main components are

    - basic model and notion of evaluation
    - equivalence relations and extensionality
    - an abstract machine as a subtheory
    - formalization of the metatheory

Key features of this theory are:

    - It is a construction of particular theories uniformly
      from given data structures (data domain and operations).

    - Focus is on control aspects of computation

    - A variety of objects
      Forms  -- for describing control aspects of computation
      Pfns  -- abstraction of form in an environment
            -- elements of the computation domain
            -- computational analogue of partial functions
      Carts -- for collecting arguments and values
      Envs -- interpretation of symbols appearing in forms
      cTrees -- objects describing particular computations


Applications of this theory include

   -  proving properties of pfns
   -  implementation of computation systems
   -  representing and mechanizing aspects of reasoning


In this talk I will describe RUM -  the applicative
fragment (flavor).  RUM is the most mathematically
developed aspect of the work and is the foundation
for the other aspects which include implementation
of a computation system called SEUS.

------------------------------

Date: 1 Mar 1984 10:00:33-EST
From: walter at mit-htvax
Subject: GRADUATE STUDENT LUNCH

               [Forwarded from the MIT-MC bboard by SASW@MIT-MC.]


                     Computer Aided Conceptual Art (CACA)
                      Eternally Evolving Seminar Series
                                   presents

                        YOKO: A Random Haiku Generator

Interns gobble oblist hash      | We will be discussing YOKO and the
Cluster at operations           | related issues of computer modeling
Hidden rep: convert!            | of artists, modeling computer artists,
                                | computer artists' models, computer
Chip resolve to bits            | models of artists' models of computers,
Bus cycle inference engine      | artist's cognitive models of computers,
Exposing grey codes             | computers' cognitive models of artists
                                | and models, models' models of models,
Take-grant tinker bucks         | artists' models of computer artists,
Pass oblist message package     | modelling of computer artists' cognitive
Federal express                 | models and artist's models of cognition.

                     Hosts: Claudia Smith and Crisse Ciro
                         REFRESHMENTS WILL BE SERVED

------------------------------

Date: 1 Mar 84 09:26:46 EST
From: PETTY@RUTGERS.ARPA
Subject: VanLehn Colloquium on Learning

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


          SPEAKER:   Dr. Kurt VanLehn
                     Xerox Corp.
                     Palo Alto Research Center

          TITLE:    "FELICITY CONDITIONS FOR HUMAN SKILL ACQUISITION"

A theory of how people learn certain procedural skills will be
presented.  It is based on the idea that the teaching and learning
that goes on in a classroom is like an ordinary conversation.  The
speaker (teacher) compresses a non-linear knowledge structure (the
target procedure) into a linear sequence of utterances (lessons).  The
listener (student) constructs a knowledge structure (the learned
procedure) from the utterance sequence (lesson sequence).  In recent
years, linguists have discovered that speakers unknowingly obey
certain constraints on the sequential form of their utterances.
Apparently, these tacit conventions, called felicity conditions or
conversational postulates, help listeners construct an appropriate
knowledge structure from the utterance sequence.  The analogy between
conversations and classrooms suggests that there might be felicity
conditions on lesson sequences that help students learn procedures.
This research has shown that there are.  For the particular kind of
skill acquisition studied here, three felicity conditions were
discovered.  They are the central hypotheses in the learning theory.
The theory has been embedded in a model, a large AI program.  The
model's performance has been compared to data from several thousand
students learning ordinary mathematical procedures:  subtracting
multidigit numbers, adding fractions and solving simple algebraic
equations.  A key criterion for the theory is that the set of
procedures that the model "learns" should exactly match the set of
procedures that students actually acquire including their "buggy"
procedures.  However, much more is needed for psychological validation
of this theory, or any complex AI-based theory, than merely testing
its predictions.  Part of the research has involved finding ways to
argue for the validity of the theory.

           DATE:   Tuesday, March 6, 1984
           TIME:   11:30 a.m.
           PLACE:  Room 323 - Hill Center

------------------------------

Date: 1 Mar 84 09:27:06 EST
From: PETTY@RUTGERS.ARPA
Subject: Tong Colloquium on Knowledge-Directed Search

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


        SPEAKER:   Christopher Tong

        TITLE: "CIRCUIT DESIGN AS KNOWLEDGE-DIRECTED SEARCH"

     The process of circuit design is usefully viewed as search through
a large space of circuit descriptions. The search is knowledge-diverse
and knowledge-
intensive: circuits are described at many levels of abstraction (e.g.
architecture, logic, layout); designers use many kinds of knowledge and
styles of reasoning to pursue and constrain the search.

     This talk presents a preliminary categorization of knowledge about
the design process and its control. We simplify the search by using a
single processor-oriented language to cover the function to structure
spectrum of circuit abstractions. We permit the circuit design and the
design problem (i.e. the associated goals) to co-evolve; nodes in the
design space contain explicit representations for goals as well as
circuits. The design space is generated by executing tasks, which
construct and refine circuit descriptions and goals (aided by libraries
of components of goals). The search is guided locally by goals and
tradeoffs; globally it is resource-limited (in design time and quality),
conflict-
driven, and knowledge-intensive (drawing on a library of strategies).

     Finally, we describe an interactive knowledge-based computer
program called DONTE (Design ONTology Experiment) that is based on the
above framework. DONTE transforms architectural descriptions of a
digital system into circuit-level descriptions.

             DATE:  Thursday, March 8, 1984
             TIME:  2:50 p.m.
             PLACE:  Room 705 - Hill Center
                   *  Coffee Served at 2:30 p.m.  *

------------------------------

Date: 1 Mar 84 09:27:23 EST
From: PETTY@RUTGERS.ARPA
Subject: Ferrante Colloquium on Automatic Programming

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

               SPEAKER:  Jeanne Ferrante
                         IBM Thomas J. Watson Research Center
                         Yorktown Heights, NY

               TITLE:   "PROGRAMS = CONTROL + DATA"

A new program representation called the program dependence graph or
PDG is presented which makes explicit both the data values on which
an operation depends (through data dependence edges) and the control
value on which the execution of the operation depends (through control
dependence edges).  The data dependence relationships determine the
necessary sequencing between operations with the same control
conditions, exposing potential parallelism.  In this talk we
show how the PDG can be used to solve a traditional stumbling block in
automatic program improvement.  A new incremental solution to the
problem of updating data flow following changes in control flow such
as branch deletion is presented.

The PDG is the basis of current work at IBM Yorktown Heights for
compiling programs in sequential languages like FORTRAN to exploit
parallel architectures.

               DATE:  Friday, March 9, 1984
               TIME:  2:50 p.m.
               PLACE:  Room 705 - Hill Center
                      *  Coffee Served at 2:30 p.m.  *

------------------------------

Date: 5 Mar 84 17:45 PST
From: Guibert.pa@PARC-MAXC.ARPA
Subject: Talk by David McAllester: Mon. Mar. 12 at 11:00 at PARC

[Forwarded from the CSLI bboard by Laws@SRI-AI.]

Title: "MATHEMATICAL ONTOLOGY"

Speaker: David McAllester (M.I.T.)
When: Monday March 12th at 11:00am
Where: Xerox PARC Twin Conference Room, Room 1500

        AI techniques are often divided into "weak" and "strong" methods.  A
strong method exploits the structure of some domain while a weak method
is more general and therefore has less structure to exploit.  But it may
be possible to exploit UNIVERSAL structure and thus to find STRONG
GENERAL METHODS.  Mathematical ontology is the study of the general
nature of mathematical objects. The goal is to uncover UNIVERSAL
RELATIONS, UNIVERSAL FUNCTIONS, and UNIVERSAL LEMMAS which can be
exploited in general inference techniques.  For example there seems to
be a natural notion of isomorphism and a standard notion of essential
property which are universal (they can be meaningfully applied to ALL
mathematical objects).  These universal relations are completely ignored
in current first order formulations of mathematics. A particular theory
of mathematical ontology will be discussed in which many natural
universal relations can be precisely defined.  Some particular strong
general inference techniques will also be discussed.

------------------------------

Date: 5 Mar 1984  22:41 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: AI Revolving Seminar

[Forwarded from the MIT-XX bboard by Laws@SRI-AI.]

Wednesday, March 7   4:00pm   8th floor playroom


               Knowledge and Problem Solving Processes
                         in Organizations

                            Gerald Barber


Human organizations have frequently been used as models for AI systems
resulting in such theories as the scientific community metaphor, the
society of mind and contract nets among others.  However these human
organizational models have been limited by the fact that they do not take
into account the epistemological processes involved in organizational
problem solving.  Understanding human organizations from an
epistemological perspective is becoming increasingly important as a
source of insight into intelligent activities and for computer-based
technology as it becomes more intricately involved in organizational
activities.

In my talk I will present the results of an organizational study which
attempted to identify problem solving and knowledge processing
activities in the organization.  I will also outline the possibilities
for development of both human organizational models and artificial
intelligence systems in light of this organizational study.  More
specifically, I will discuss the shortcomings of organizational
theories and application of the results of this work to highly
parallel computer systems such as the APIARY.

------------------------------

Date: Tue 6 Mar 84 09:05:05-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT -- Friday, March 9, 1984

      [Forwarded from the SIGLUNCH distribution by Laws@SRI-AI.]

Friday,   March 9, 1984
LOCATION: Braun Lecture Hall (smaller), ground floor of Seeley Mudd
          Chemistry Building (approx. 30 yards west of Gazebo)
12:05

SPEAKER:  Ben Grosof
          Stanford University, HPP

TOPIC:    AN INEQUALITY PARADIGM FOR PROBABILISTIC KNOWLEDGE
          Issues in Reasoning with Probabilistic Statements

BACKGROUND:     Reasoning with probabilistic knowledge and evidence is
a key aspect of  many AI systems.  MYCIN  and PROSPECTOR were  pioneer
efforts but were limited and  unsatisfactory in several ways.   Recent
methods  address  many  problems.    The  Maximum  Entropy   principle
(sometimes called  Least  Information)  provides  a  new  approach  to
probabilities. The Dempster-Shafer theory  of evidence provides a  new
approach to confirmation and disconfirmation.

THE TALK: We begin by relating probabilistic statements to logic.   We
then  review  the  motivations  and  shortcomings  of  the  MYCIN  and
PROSPECTOR  approaches.   Maximum  Entropy  and  Dempster-Shafer   are
presented, and recent work using them is surveyed.  (This is your  big
chance to  get up  to date!)   We  generalize both  to a  paradigm  of
inequality constraints on  probabilities.  This  paradigm unifies  the
heretofore divergent  representations  of probability  and  evidential
confirmation in  a formally  satisfactory  way.  Least  commitment  is
natural.  The interval  representation for  probabilities includes  in
effect a meta-level which allows  explicit treatment of ignorance  and
partial information,  confidence  and  precision,  and  (in)dependence
assumptions.  Using bounds  facilitates reasoning ABOUT  probabilities
and evidence.  We extend the Dempster-Shafer theory significantly  and
make an  argument  for  its  potential,  both  representationally  and
computationally.  Finally we list some open problems in reasoning with
probabilities.

------------------------------

Date: Fri, 2 Mar 84 11:18 EST
From: Leslie Heeter <heeter%SCRC-VIXEN@MIT-MC.ARPA>
Subject: STeP-84

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

        In addition to the call for papers below, Eero Hyvonen
has asked me to announce that they are looking for a lecturer
for the tutorial programme. The tutorial speaker should preferably
have experience in building industrial expert systems. For a few
hours' lecture, they are prepared to pay for the trip, the stay, and
some extra.

Exhibitors and papers are naturally welcome, too.


                        C A L L  F O R  P A P E R S

                                STeP-84

                Finnish Artificial Intelligence Symposium
                (Tekoalytutkimuksen paivat)
                Otaniemi, Espoo, Finland
                August 20-22, 1984


Finnish Artificial Intelligence Symposium (STeP-84) will be held at
Otaniemi campus of Helsinki University of Technology. The purpose of
the symposium is to promote AI research and application in Finland.

Papers (30 min) and short communications (15 min) are invited on
(but not restricted to) the following subfields of AI:

Automaattinen ohjelmointi       (Automatic Programming)
Kognitiivinen mallittaminen     (Cognitive Modelling)
Asiantuntijajarjestelmat        (Expert Systems)
Viidennen polven tietokoneet    (Fifth Generation Computers)
Teolliset sovellutukset         (Industrial Applications)
Tietamyksen esittaminen         (Knowledge Representation)
Oppiminen                       (Learning)
Lisp-jarjestelmat               (Lisp Systems)
Logiikkaohjelmointi             (Logic Programming)
Luonnollinen kieli              (Natural Language)
Hahmontunnistus                 (Pattern Recognition)
Suunnittelu ja etsinta          (Planning and Search)
Filosofiset kysymykset          (Philosophical Issues)
Robotiikka                      (Robotics)
Lauseen todistaminen            (Theorem Proving)
Konenako                        (Vision)

The first day of the symposium is reserved for the Tutorial programme
on key areas of AI presented by foreign and Finnish experts. There will
be an Industrial Exhibition during the symposium. Submission deadline
for one page abstracts of papers and short communications is April 15th.
Camera ready copy of the full text is due by July 31st. The address of
the symposium is:

        STeP-84
        c/o Assoc. Prof. Markku Syrjanen
        Helsinki University of Technology
        Laboratory of Information Processing Science
        Otakaari 1 A
        02150 Espoo 15                          Phone: +358-0-4512076
        Finland                                 Telex: 125161 HTKK SF

Local Arrangements:

Eero Hyvonen, Jouko Seppanen, and Markku Syrjanen
Helsinki University of Technology

Program Committee:

Kari Eloranta                           Erkki Lehtinen
    University of Tampere                   University of Jyvaskyla
Seppo Haltsonen                         Seppo Linnainmaa
    Helsinki University of Tech.            University of Helsinki
Rauno Heinonen                          Klaus Oesch
    State Technical Research Centre         Nokia Corp.
Harri Jappinen                          Martti Penttonen
    Sitra Foundation                        University of Turku
Matti Karjalainen                       Matti Pietikainen
    Helsinki University of Tech.            University of Oulu
Kimmo Koskenniemi                       Matti Uusitalo
    University of Helsinki                  Finnish CAD/CAM Association
Kari Koskinen
    Finnish Robotics Association

Organised under the auspices of the Finnish Computer Science Society.
Conference languages will be Finnish, Swedish, and English.

------------------------------

End of AIList Digest
********************

∂07-Mar-84  1632	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #27
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Mar 84  16:30:06 PST
Date: Wed  7 Mar 1984 15:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #27
To: AIList@SRI-AI


AIList Digest            Thursday, 8 Mar 1984      Volume 2 : Issue 27

Today's Topics:
  Automatic Programming - Request for Bibliography,
  Pattern Recognition - Request for Character Recognition Algorithms,
  Expert Systems - Request for MYCIN Source Code, Tutorial,
  AI Tools - IQLISP Source,
  Mathematics - The Four-Color Theorem,
  AI Literature - The Artificial Intelligence Report,
  Expert Systems - EURISKO/AM Overview
----------------------------------------------------------------------

Date: 6 Mar 1984 1612-EST
From: CASHMAN at DEC-MARLBORO
Subject: For AI digest: request for program synthesis bibliography

Does anyone have (a pointer to) a bibliography (preferably annotated) of
papers on program synthesis?  Is there a good survey paper or article on
the field (other than what's in the Handbook of AI)?

  -- Paul Cashman (Cashman@DEC-Marlboro)

[Richard Waldinger (@SRI-AI) suggests a survey and bibliography on
program synthesis in "Synthesis: Dreams -> Programs" which appeared in
the IEEE Transactions on Software Engineering in 1979.  -- KIL]

------------------------------

Date: 7 Mar 1984 0643 PST
From: Richard B. August <AUGUST@JPL-VLSI>
Reply-to: AUGUST@JPL-VLSI
Subject: SEARCH FOR PATTERN/CHARACTER RECOGNITION ALGORITHMS,
         ARTICLES ETC.

BEGINNING RESEARCH ON CHARACTER RECOGNITION TECHNIQUES.
OBJECTIVE: DEVELOP CHARACTER INPUT DEVICE (WAND) TO ACCEPT THE MAJORITY
OF FONTS FOUND IN PUBLICATIONS.

POINTERS TO PUBLICATIONS ARE HELPFUL.

THANKS

REGARDS RAUGUST

[The international joint conferences on pattern recognition would
be a good place to start.  Proceedings are available from the IEEE
Computer Society.  A 1962 book I've found interesting is "Optical
Character Recognition" by Fischer, et al.  Good luck (you'll need
it).  -- KIL]

------------------------------

Date: Wed, 7 Mar 84 14:14:35 PST
From: William Jerkovsky <wj@AEROSPACE>
Subject: MYCIN

I would like to execute a simple problem on MYCIN. I have recently gotten
interested in expert systems; since my wife is a bacteriologist I think both
of us would enjoy the interaction with the program via our home computer
(terminal).

Can anyone point out the way to get a (free) copy of MYCIN (even if it is
only a simple early version)? Is there a way I can execute a version
interactively from home without actually getting a copy? Does anybody know
of an on-line tutorial on MYCIN? Is there a simple version of MYCIN (or a
reasonable facsimile) which runs on an Apple //e or on an IBM PC?

I'll appreciate whatever help I can get.

Thanks

Bill Jerkovsky

------------------------------

Date: Tue 6 Mar 84 15:48:55-PST
From: Sam Hahn <SHahn@SUMEX-AIM.ARPA>
Subject: IQLISP Source

The source for IQLisp is:

        Integral Quality, Inc.
        P.O. Box 31970
        Seattle, WA  98103
        (206) 527-2918

Claims to be similar to UCI Lisp, except function def's are stored in cells
within identifiers, not on property lists; arg. handling is specified in the
syntax of the expression defining the function; I/O functions take an explicit
file argument, which defaults to the console; doesn't support FUNARGS.

IQLisp does provide:
        32kb character strings,
        77000 digit long integers,
        IEEE format floating point,
        point and line graphics,
        ifc to assembly coded functions,
        31 dimensions to arrays.

Costs $175 for program and manual, PCDOS only.

I've taken the liberty to include some of their sales info for those who may
not have heard of IQLisp.  It's fairly new, and they claim to soon make a
generic MSDOS version (though probably without graphics support).

------------------------------

Date: Wed, 7 Mar 84 09:16 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Re: The Four-Color Theorem

By "planar map" in my previous message I meant to connote a structure on
a two-dimensional surface, not strictly a flat plane.  In fact, the
plane and the sphere are topologically equivalent (the plane is a sphere
of infinite radius), so four colors suffice for both; for the torus,
which has a different "connectivity," it has long been known that seven
colors are both necessary and sufficient.

I'm not a mathematician but at one time (right after reading an article)
I felt as if I understood the proof.  As I recall it is based on the
fact that if there are any maps that require five colors there is a
minimal (smallest) map that requires five colors.  It is possible to
construct sets of graphs (representing map regions) of varying
complexity for which any map must include at least one member of the
set.  It is also possible to determine for some particular graph whether
it can be "reduced" (so that it represents fewer regions) without
altering its four-colorability or its interactions with its neighbors.
Clearly the minimal five-color map cannot contain a "reducible" graph
(else it is not minimal).

Evidently, if one can construct a set of graphs of which ANY map must
contain at least one member, and show that EVERY member of that set is
reducible, then the minimal five-color map cannot exist; hence no
five-color map can exist.  Now if it were possible to construct such a
set with, say, 20 graphs one could show explicitly BY HAND that each
member was reducible.  No one would call such a proof "ugly" or "not a
true proof;" it might not be considered particularly elegant but it
wouldn't be outside the mainstream of mathematical reasoning either (and
it doubtless would have been found years ago).  The problem with the
actual case is that the smallest candidate set of graphs had thousands
of members.  What was done in practice was to devise algorithms which
would succeed at reducing "most" (>95%?) reducible graphs.  So most of
the graph reduction was done by computer, the remaining cases being done
by hand.  (I understand that to referee the paper another program had to
be written to check the performance of the first.)
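
The reduction machinery of the actual proof is far beyond a digest posting,
but the underlying notion of k-colorability is easy to make concrete.  Here
is a toy backtracking checker in Python (all names are mine; nothing below
is from the Appel-Haken program) that decides whether a region-adjacency
graph can be colored with k colors:

```python
def colorable(adj, k, assignment=None):
    """Return a k-coloring of the graph `adj` (dict: node -> neighbor list),
    or None if no such coloring exists."""
    if assignment is None:
        assignment = {}
    # Pick an uncolored region; if none remain, the coloring is complete.
    uncolored = [n for n in adj if n not in assignment]
    if not uncolored:
        return dict(assignment)
    node = uncolored[0]
    for color in range(k):
        # A color is legal if no already-colored neighbor uses it.
        if all(assignment.get(nb) != color for nb in adj[node]):
            assignment[node] = color
            result = colorable(adj, k, assignment)
            if result is not None:
                return result
            del assignment[node]  # backtrack
    return None

# Four mutually adjacent regions (the complete graph K4) form a planar
# map that needs exactly four colors.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
assert colorable(k4, 3) is None      # three colors do not suffice
assert colorable(k4, 4) is not None  # four do
```

Exhaustive search like this is only feasible for tiny maps, which is
precisely why the proof works instead with unavoidable sets of reducible
configurations rather than enumerating colorings.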

I would like to hear any criticism of the Illinois proof that is more
specific than "ugly" or "many feel that this does not constitute a true
proof."  A pointer to the mathematical literature will suffice; my
impression is that the four-color theorem is widely accepted as having
been proved.  (We may be getting a bit far afield of AI here; I would
say that my impression of the techniques used in the automatic reduction
program was that they were not "artificial intelligence," but since they
were manifestly "artificial" I hesitate to do so for fear of rekindling
the controversy over what constitutes "intelligence!")

Mark

------------------------------

Date: Tue 6 Mar 84 10:58:51-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: The Artificial Intelligence Report

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I have received a sample copy of the Artificial Intelligence Report, vol. 1
number 1, January 1984. It is being published locally and will have ten
issues per year.  It is more of a newsletter-type publication, with the
latest information on research (academic and industrial) and applied AI
within industry.  The cost is $250 per year.  The first issue has 15 pages.
I will place it on the new journal shelf.  [...]

[I may try to start charging for AIList ...  -- KIL]

------------------------------

Date: Tue, 6 Mar 84 16:50 PST
From: "Allen Robert"@LLL-MFE.ARPA
Subject: EURISKO/AM review (415) 422-4881


In response to Rusty's request regarding EURISKO (V2,#22),  the  following
is a brief excerpt from my thesis qualifying/background paper on knowledge
acquisition in expert systems.  I tried to summarize the  system  and  its
history;   a  lot  of  detail has been removed.  I hope the description is
accurate;  please feel free to criticize.


EURISKO is a part of  Doug  Lenat's  investigation  of  machine  learning,
drawing   its   roots  from  his  Stanford  Ph.D.   thesis  research  with
AM [Lenat 76].  AM was somewhat unusual among learning systems in that  it
does  not have an associated performance element (expert system).  Rather,
AM is supplied with an initial  knowledge  base  representing  simple  set
theoretic  concepts,  and  heuristics  which  it  employs to explore those
concepts.   The  goal  is  for  AM  to  search  for  new   concepts,   and
relationships between concepts, guided by those heuristics.

AM represents concepts (e.g., prime and natural numbers) in frames.  A
frame's  slots  describe  attributes  of  the  concept,  such as its name,
definition, boundary values, and examples.  A definition slot includes one
or  more  LISP  predicate  functions;   AM applies definition functions to
objects (values, etc.) to determine  whether  they  are  examples  of  the
concept.   For  instance,  the  Prime  Number frame has several definition
predicates which can each determine (for different  circumstances)  if  an
integer   is  prime  or  not;   those  predicates  (and  boundary  values)
effectively define the concept "prime number" within AM.
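
That frame-and-predicate idea can be sketched roughly as follows, in Python
rather than the LISP AM actually used; the slot names here are illustrative,
not AM's own:

```python
# A concept frame whose "definitions" slot holds predicate functions;
# applying any definition to an object tests membership in the concept.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

prime_number = {
    "name": "Prime-Number",
    "definitions": [is_prime],   # one or more predicate functions
    "examples": [],
    "boundary": [2],             # the smallest prime
}

def is_example(frame, obj):
    """An object is an example of a concept if any definition accepts it."""
    return any(pred(obj) for pred in frame["definitions"])

# Fill the examples slot by testing small cases.
for n in range(2, 20):
    if is_example(prime_number, n):
        prime_number["examples"].append(n)

print(prime_number["examples"])   # [2, 3, 5, 7, 11, 13, 17, 19]
```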

Any slot may have zero or more heuristics, expressed as production  rules,
expressing strategies for exploring concepts.  Heuristics primarily obtain
or verify slot values;  they may also postulate new concepts/frames, or
specify  tasks  to  be  performed.   AM  maintains  an  "agenda  of tasks"
expressed as goals, in the form "Find or verify the value of slot S,  from
concept/frame  C."  The  basic  control  structure selects a task from the
agenda, and checks the slot (S) for heuristics.  If one or more are found,
a  rule  interpreter  is  invoked  to  execute  them.   If  slot  S has no
heuristics, it may point (possibly  through  several  levels)  to  another
frame  whose  corresponding  (same  name)  slot  does, in which case those
heuristics are executed;  thus, heuristics from higher-level concepts  may
be  employed  or  inherited  in  exploring  less  abstract concepts.  This
continues until all the related heuristics are executed;  AM then  returns
to the agenda for a new goal.
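
The agenda loop and heuristic inheritance described above might be sketched
like this (again an illustrative Python toy, with invented frame and
heuristic names, not AM's code):

```python
# Tasks name a slot S of a concept/frame C.  Heuristics for S are looked
# up on C itself, or inherited by following the parent link upward until
# some frame supplies them.

frames = {
    "Object":       {"parent": None,
                     "heuristics": {"examples": ["try-small-cases"]}},
    "Number":       {"parent": "Object", "heuristics": {}},
    "Prime-Number": {"parent": "Number", "heuristics": {}},
}

def find_heuristics(concept, slot):
    """Walk up parent links until a frame supplies heuristics for `slot`."""
    while concept is not None:
        frame = frames[concept]
        if slot in frame["heuristics"]:
            return frame["heuristics"][slot]
        concept = frame["parent"]
    return []

agenda = [("Prime-Number", "examples")]   # "fill slot S of concept C"
while agenda:
    concept, slot = agenda.pop(0)
    for heuristic in find_heuristics(concept, slot):
        # In AM a rule interpreter would execute the heuristic here.
        print("running", heuristic, "on", concept)
```

Here the abstract concept "Object" supplies the heuristic that the more
specific "Prime-Number" inherits, mirroring the higher-level-to-lower-level
inheritance the text describes.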

AM is provided with an  initial  knowledge  base  of  around  one  hundred
frames/concepts   from   finite  set  theory,  which  include  around  250
heuristics.  The system is then "set loose"  to  explore  those  concepts,
guided  by  heuristics;   AM  postulates new concepts and then attempts to
judge their  validity  and  utility.   Over  a  period  of  time,  AM  may
conjecture  and  explore  several  hundred  new concepts;  some eventually
become well established and are  themselves  used  as  extensions  of  the
initial knowledge base.

AM never managed to discover concepts  that  were  not  already  known  in
mathematics;   however  it  did  discover  many  well  known  mathematical
principles (e.g., de Morgan's laws, unique factorization), some of which
were originally unknown to Lenat.  It was hoped that AM might also
be applied to the domain of heuristics themselves, i.e., exploring
heuristic concept/frames instead of mathematical concept/frames, but the
system did not make much progress in this area.  Lenat explains an
underlying problem: AM's representation of domain knowledge (LISP
functions) is fundamentally similar  to  the  primitives  of  mathematical
notation,  while  heuristics  lack  a  similar close relationship.  He has
developed  new  ideas  regarding  the  meaning   and   representation   of
heuristics, which are being explored with AM's successor,
EURISKO [Lenat 82,83a,83b].

One significant lesson learned from AM, and being applied in  EURISKO,  is
(roughly)  that  explicit  treatment  of heuristics and meta-knowledge (as
well as assertive domain knowledge) is a necessary condition for  learning
heuristics  (and  assertive  domain  knowledge).   The  main  focus of the
EURISKO project is to  investigate  representation  and  reasoning/control
issues   related  to  learning  (heuristics,  operators,  and  new  domain
objects).  Also, where concepts in AM were related to mathematical notions
(like Prime Numbers), flexibility is an important design criterion for
EURISKO, which is being applied  to  a  number  of  problem  domains  (see
[Lenat 83b]).

Like AM, EURISKO is a frame based system which represents  domain  objects
in   frames.    However,   where   AM  attached  heuristics  to  slots  in
concepts/frames, EURISKO represents heuristics themselves as  frames.   In
general,  EURISKO  goes  much  further  than AM in explicitly defining and
representing knowledge at many levels;  everything possible is  explicitly
represented as an object.  For example, every kind of slot (e.g., ISA,
Examples) has a frame associated with it, which  explicates  the  meanings
and  operations  of  the slot.  This allows the system to reason with each
kind of slot (as well as with the slot value), for example to know whether
a  particular  type  of  slot  represents guaranteed, probable, or assumed
knowledge.
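
A minimal illustration of that slots-as-first-class-objects idea; the slot
kinds and epistemic labels below are hypothetical, chosen only to echo the
"guaranteed, probable, or assumed" distinction just mentioned:

```python
# Each kind of slot has its own descriptive frame, so the system can
# reason about the slots themselves, not just the values stored in them.

slot_kinds = {
    "isa":      {"name": "ISA",      "status": "guaranteed"},
    "examples": {"name": "Examples", "status": "probable"},
    "guess":    {"name": "Guess",    "status": "assumed"},
}

unit = {
    "name": "Prime-Number",
    "isa": ["Number"],
    "guess": ["all primes greater than 2 are odd"],
}

def epistemic_status(unit, slot):
    """Consult the slot's own frame to see how firmly to trust its value."""
    return slot_kinds[slot]["status"]

assert epistemic_status(unit, "isa") == "guaranteed"
assert epistemic_status(unit, "guess") == "assumed"
```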

Part of the approach in EURISKO is to  emphasize  the  importance  of  the
representation  language itself in solving a problem.  The RLL frame based
language [Greiner 80] was developed for  this  purpose.   In  RLL,  almost
every object (notably including heuristics) is represented as an explicit,
discrete frame (called a "unit" in RLL).  Thus heuristics become
objects which a system can use, manipulate, and reason about just like any
other object.  Without going into details, RLL has a  number  of  features
which  are  oriented  toward  explicit  representation and manipulation of
domain knowledge, both factual and heuristic.  It has a more sophisticated
"multiple-agendae" control structure which is itself represented as frames
in the knowledge base.  Operations with and on frames  include  a  lot  of
bookkeeping  by  RLL, intended to retain explicit knowledge which was lost
in AM.  Because heuristics are explicitly represented objects (frames), it
is possible for built-in or domain-specific knowledge to be applied to the
learning of heuristics (i.e.  using built-in heuristics which specify  how
to postulate and explore new heuristics).

EURISKO has been notably successful as both a learning and  a  performance
(expert)  system in a number of domains.  [Lenat 83b] describes the use of
EURISKO in playing the Traveller  Trillion  Credit  Squadron  (TCS)  game,
where it has won two national tournaments, and discovered some interesting
playing strategies.  In [Lenat 82a], EURISKO's application to  "high-rise"
VLSI  circuit design is described.  EURISKO constructed a number of useful
devices and circuits, and has discovered  some  important  heuristics  for
circuit design.

                              ----------

Greiner, R., Lenat, D.  1980.  "A Representation Language Language." Proc.
AAAI 1, pp.  165-169.

Lenat, D.B.  1976.  "AM:  An artificial intelligence approach to discovery
in   mathematics   as  heuristic  search."  Ph.D.   Diss.   Memo  AIM-286,
Artificial Intelligence Laboratory, Stanford University, Stanford,  Calif.
(Revised  version  in R.  Davis, D.  Lenat (Eds.), Knowledge Based Systems
in Artificial Intelligence.  New York:  McGraw-Hill.  1982.)

Lenat, D.B.  1982.  "The nature of heuristics." AI Journal 19:2.

Lenat, D.B.  1983a.  "The nature of heuristics II." AI Journal 20:2.

Lenat, D.B.  1983b.  "EURISKO:  A program that learns new  heuristics  and
domain concepts, The nature of heuristics III." AI Journal 21:1-2.

                              ----------
Rob Allen <ALLEN ROBERT@LLL-MFE.ARPA>

------------------------------

End of AIList Digest
********************

∂09-Mar-84  2228	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #28
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Mar 84  22:28:19 PST
Date: Fri  9 Mar 1984 21:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #28
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Mar 1984      Volume 2 : Issue 28

Today's Topics:
  Games - SMAuG Player Simulation,
  Mathematics - The Four-Color Theorem,
  AI Tools - Interlisp Availability,
  Review - Playboy AI Article,
  Expert Systems - Computer Graphics & Hardware/Software Debugging,
  Expert Systems - Production Tools,
  Review - Laws of Form
----------------------------------------------------------------------

Date: 8 Mar 84 17:17:53 EST
From: GOLD@RU-BLUE.ARPA
Subject: a request for suggestions....

Some of you may be aware of the project known as SMAuG (Simultaneous
Multiple AdventUrer Game) that is ongoing at Rutgers University.  It is
an applied research project designed to examine the problems of
distributing the work of a complex piece of software across local intelligent
devices and a remote timesharing computer.  The software is a multiple
player adventure game.

Within the game a player may interact with other players, or with
software controlled players referred to as Non Player Characters (NPC's).
The NPC's are the area of the project which I am personally involved
with and for which I write to this bboard.  There are many interesting
subtopics within the NPC issue: NPC communication, self-mobility,
acquisition of knowledge, and rescriptability, just to name a few.
The object is to create an NPC which can interact with a player character
without making it obvious that it is machine controlled and not
another player character.  [Aha! Another Turing test! -- KIL]

I would like to request suggestions of relevant publications that I
should be familiar with.  This is a large project, but I am loath
to make it even larger by ignoring past work that has been done.
I would greatly appreciate any suggestions for books, journal articles,
etc. that might offer a new insight into the problem.

Please send responses to Gold@RU-Blue.

Thank you very much,

Cynthia Gold

------------------------------

Date: Thu 8 Mar 84 10:54:46-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: Re: The Four-Color Theorem

I am not familiar with the literature on the 4-color proof, nor with whether it
is commonly accepted.  I do however have a lot of experience with computer
programs and have seen a lot of subtle bugs that do not surface until long
after use has convinced everyone that the software works in all possible
cases.  The fact that another person wrote a different program that got the
same results means little, as the same subtle bugs are likely to be unforeseen
by other programmers.  If the program is so complicated that you cannot prove
it or its results correct, then I think the mathematicians would be foolish
to accept its output as a proof.

David

------------------------------

Date: Thu 8 Mar 84 09:59:26-PST
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Re: The Four-Color Problem

re: the 4-color problem
A nice overview paper by the authors appears in "Mathematics Today,"
L. A. Steen (ed.), Vintage Books, 1980.

------------------------------

Date: 8 Mar 1984 11:22-PST
From: Raymond Bates <RBATES at ISIB>
Subject: Interlisp Availability

A version of Interlisp is available from ISI that runs on the VAX
line of computers.  We have versions for Berkeley UNIX 4.1 or 4.2
and a native VMS version.  It is a full and complete
implementation of Interlisp.  For more information send a message
to Interlisp@ISIB with your name and address or send mail to:

Information Science Institute
ISI-Interlisp Project
4676 Admiralty Way
Marina del Rey, CA  90292

Interlisp is a programming environment based on the lisp
programming language.  Interlisp is in widespread use in the
Artificial Intelligence community.  It has an extensive set of
user facilities, including syntax extensions, uniform error
handling, automatic error correction, an integrated
structure-based editor, a sophisticated debugger, a compiler and
a file system.

P.S.  I just got AGE up and running under ISI-Interlisp (the new
name of Interlisp-VAX) and will start to work on EMYCIN soon.

/Ray

------------------------------

Date: Thu 8 Mar 84 20:35:02-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Playboy 4/84 article: AI-article by Lee Gomes

If you needed an excuse to read Playboy (even deduct it from your taxes??),
on page 126 there is an article:

        The Mind of a New Machine.  Can the science of artificial intelligence
                produce a computer that's smarter than the men who build it?

Nothing earth-shaking: a little history, a little present state of the art,
a little outlook into the future.  But it's interesting what's being fed
to this audience.  Something to hand to a friend who wants to know what
this is all about, and doesn't mind getting side-tracked by "Playmates
Forever" on page 129.

        Enjoy or Suffer, it's your choice.

------------------------------

Date: Thu 8 Mar 84 16:55:31-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems in Computer Graphics

The February issue of IEEE Computer Graphics and Applications has a short
blurb on Dixon and Simmons' expert system for mechanical engineering design.
Following the blurb, on p. 61, is the notice

    IEEE Computer Graphics and Applications is planning an issue
    featuring articles on expert systems in computer graphics
    applications in early 1985.  Those interested in contributing
    should contact Carl Machover, Machover Associates, Inc., 199
    Main St., White Plains, NY 10601; (914) 949-3777.


The issue also contains an article on "Improved Visual Design for Graphics
Display" by Reilly and Roach.  The authors mention the possibility of
developing an expert consulting system for visual design that could be
used to help programmers format displays.  (I think automated layout
for the graphics industry would be even more useful, and an excellent
topic for expert systems research.)  They cite

    J. Roach, J.A. Pittman, S.S. Reilly, and J. Savarse, "A Visual
    Design Consultant," Int'l Conf. Cybernetics and Society, Seattle,
    Wash., Oct. 1982.

as a preliminary exploration of this idea.

                                        -- Ken Laws

------------------------------

Date: Fri 9 Mar 84 17:23:30-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert System for Hardware/Software Debugging

The March issue of IEEE Computer has an article by Roger Hartley
of Kansas State University on the CRIB system for fault diagnosis.
The article starts with a discussion of expertise among experts
vs. that among practitioners, and about the process of building
a knowledge base.  Hartley then introduces CRIB and discusses, at
a fairly high level, its application to fault diagnosis in ICL 2903
minicomputers.  He then briefly mentions use of the same hierarchical
diagnostic strategy in debugging the VME/K operating system.

This article is an expanded version of the paper "How Expert Should an
Expert System Be?" in the 7th IJCAI, 1981.

                                        -- Ken Laws

------------------------------

Date: 8 March 1984 1426-est
From: Roz    <RTaylor.5581i27TK @ RADC-MULTICS>
Subject: Expert Systems Production tools

To all who have queried me regarding what info I have or have received on
expert systems production tools...I must apologize.  Have not gotten it
into suitable format as yet;  I am literally behind the power curve
with some new efforts (high visibility) recently assigned to me (approx
4 weeks ago--about the time I could start editing what I have).  I will
post it to the AIList, but unless something helps it won't be before
April.  Unfortunately, what has already been massaged is in 132-char
[tabular] format and would not post easily to the list that way.  I am
sorry, folks.  But I have not forgotten you.
                                  Roz

------------------------------

Date: 7 Mar 84 19:12:34 PST (Wed)
From: Carl Kaun <ckaun@aids-unix>
Subject: More Laws of Form


Before  I  say anything,  you all should know that I consider myself at  best
naive  concerning formal logic.   Having thus outhumbled myself  relative  to
anyone  who  might answer me and having laid a solid basis for my  subsequent
fumbling around, I give you my comments about Laws of Form.  I do so with the
hope that it stirs fruitful discussion.

First,  as  concerns  notation.   LoF  uses a symbol called at  one  point  a
"distinction"  consisting  of  a  horizontal  bar  above  the  scope  of  the
distinction,  ending  in a vertical bar.   Since I can't reproduce that  very
well  here,  I  will  use parentheses to designate scope where the  scope  is
otherwise ambiguous.  Also, LoF uses a blank space which can be confusing.  I
will  use  an  underline "←" in its place.   And LoF  places  symbols  in  an
abutting  position to indicate disjunction.   I will use a comma to  separate
disjunctive terms.

In LoF, the string of symbols " (a)|, b ", or equivalently, " a|, b ", is
equivalent  logically to the statement " a implies b".   The comparison  with
the  equivalent  statement  " (not a) or b" is also obvious.  The "|"  symbol
seems to be used as a postfix unary [negation] operator.  "a" and "b" in  the
formulae  are  either  "←"  or  "←|" or any allowable combination of these in
terms of the constructions available through the finite  application  of  the
symbols "|" and "←".  LoF goes on to talk about this form and what it implies
at some length.   Although it derives some interesting looking formulae (such
as the one for distribution), I could find nothing that cannot be equivalently
derived from Boolean Algebra.
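
The logical reading claimed here is easy to verify mechanically.  A small
Python check (the function names are mine, coined for the two LoF
operations; they are not the book's terminology):

```python
# Read the cross "|" as negation and juxtaposition as disjunction, and
# check that the LoF form "(a)|, b" behaves exactly as "a implies b",
# i.e. "(not a) or b", over all truth assignments.

def cross(x):
    """The "|" mark, read as negation."""
    return not x

def juxtapose(x, y):
    """Abutting forms, read as disjunction."""
    return x or y

for a in (False, True):
    for b in (False, True):
        lof = juxtapose(cross(a), b)   # (a)|, b
        implies = (not a) or b         # a implies b
        assert lof == implies

print("'(a)|, b' matches 'a implies b' on all four cases")
```

This bears out the observation that, at this level, the calculus reproduces
Boolean algebra rather than extending it.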

Eventually, LoF comes around to the discussion of paradoxical forms, of which
the  statement  "this sentence is false" is the paradigm.   As I  follow  the
discussion at this point, what one really wants is some new distinction (call
it "i") which satisfies the formula " (i|)|, i".  At least I think it  should
be a distinction, perhaps it should also be considered simply to be a symbol.
The above form purports to represent the sentence "this sentence is false".
The  formulation  in  logic  is similar to the way  one  arrives  at  complex
numbers,  so  LoF also refers to this distinction as being  "imaginary".   At
this  point I am very excited,  I think LoF is going to explore the  formula,
create an algebra that one can use to determine paradoxical forms,  etc.  But
no  development of an algebra occurs.   I played around with this some  years
ago  trying  to get a consistent algebra,  but I didn't really  get  anywhere
(could well be because I don't know what I'm doing).  LoF goes on to describe
the  distinction  "i"  in terms of  alternating  sequences  of  distinctions,
supposedly linking the imaginary distinction to the complex number generator
exp(ix); however, I find this discussion most unconvincing and unenlightening.

Now LoF returns to the subject of distinction again,  describing distinctions
as  circles in a plane (topologically deformable),  where distinction  occurs
when one crosses the boundary of a circle.   In this description,  the set of
distinctions  one can make is firmly specified by the number of circles,  and
the  ways  that circles can include other circles,  etc.   LoF gives  a  most
suggestively  interesting  example of how the topology of the  surface  might
affect  the distinctions,  and even states that different distinctions result
on spheres than on planes, and on toroids than on either, etc.  Unfortunately
he  does not expound in this direction either,  and does not link it  to  his
"imaginary"  form  above,  and I think I might have given up on LoF  at  this
time.   LoF  doesn't  even discuss  intersecting  circles/distinctions.

The  example  that  LoF  gives is of a sphere where one  distinction  is  the
equator,   and   where  there  are  two  additional  distinctions   (circles,
noninclusive  one  of  the  other) in  the  southern  hemisphere.   Then  the
structure  of the distinctions one can make depends on whether one is in  the
northern  hemisphere,  or  in  the southern hemisphere external  to  the  two
distinctions there, or inside one of the circles/distinctions in the southern
hemisphere.   As  I say,  I really thought (indeed think today) that  perhaps
there is some meat to be found in the approach,  but I don't have the time to
pursue it.

I  realize  that  I have mangled LoF pretty  considerably  in  presenting  my
summary/assessment/impressions of it.     This is entirely in accordance with
my expertise established above.   Still,  this is about how much I got out of
LoF.   I found some suggestive ideas,  but nothing new that I (as a  definite
non-logician) could work with.   I would dearly love it if someone would show
me how much more there is.  I suspect I am not alone in this.


Carl Kaun  ( ckaun@AIDS-unix )

------------------------------

End of AIList Digest
********************

∂09-Mar-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #29
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Mar 84  23:24:20 PST
Date: Fri  9 Mar 1984 21:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #29
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Mar 1984      Volume 2 : Issue 29

Today's Topics:
  Administrivia - New Osborne Users Group,
  Obituary - A. P. Morse,
  Courses - Netwide AI Course Bites the Dust,
  Seminars - Joint Seminar on Concurrency &
    Programming by Example &
    Mathematical Ontology Seminar Rescheduled &
    Incompleteness in Modal Logic &
    Thinking About Graph Theory
----------------------------------------------------------------------

Date: 3 Mar 84 17:14:40-PST (Sat)
From: decvax!linus!philabs!sbcs!bnl!jalbers @ Ucb-Vax
Subject: Atten:Osborne owners
Article-I.D.: bnl.361

ATTENTION users of Osborne computers.  The Capital Osborne Users Group (CapOUG)
is seeking other Osborne users groups across the country.  If you are a member
of such a group, please send the name of the president, along with an address
and phone number.  We are also looking for contacts via the net (USENET or
ARPA/MILNET) between groups across the country.   If you can be such a contact
or know of someone who can, please send me mail.  All that would be involved
is sending and receiving summaries of meetings, parts of newsletters, and
acting as an interface between your group and the other groups 'subscribing' to
this 'mailing list'.  At this point, it is not certain whether communication
would be through a mail 'reflector' or via a 'digest'; however, the latter is
most likely.  In return for your service, the CapOUG will exchange our software
library, which consists of over 120 SD diskettes, and articles from our
newsletter.  The 'interface' would be asked to offer the same to the other
members of the list.
Even if you don't belong to a group, this would be a great way to find
the group in your area.

                                                        Jon Albersg
                                                ARPA    jalbers@BNL
         (UUCP)...!ihnp4!harpo!floyd!cmc12!philabs!sbcs!bnl!jalbers

------------------------------

Date: Wed 7 Mar 84 22:55:24-CST
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: A. P. Morse

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

A. P. Morse, Professor of Mathematics at UC Berkeley, author
of the book "A Theory of Sets," died on Monday, March 5.
Morse's formal theory of sets, sometimes called Kelley-Morse
set theory, is perhaps the most widely used formal theory
ever developed.  Morse and his students happily wrote proofs
of serious mathematical theorems (especially in analysis)
within the formal theory; it is rare for formal theories
actually to be used, even by their authors.  A key to the
utility of Morse's theory of sets is a comprehensive
definitional principle, which permits the introduction of
new concepts, including those that involve indicial (bound)
variables.  Morse's set theory was the culmination of the
von Neumann, Bernays, Godel theory of sets, a theory that
discusses sets (or classes) so "large" that they are not
members of any set.  Morse took delight in making the
elementary elegant.  His notion of ordered pair "works" even
if the objects being paired are too "big" to be members of a
set, something not true about the usual notion of ordered
pairs.  Morse's theory of sets identifies sets with
propositions, conjunction with intersection, disjunction
with union, and so forth.  Through his students (e.g., W. W.
Bledsoe), Morse's work has influenced automatic
theorem-proving.  This influence has shaped the development
of mechanized logics and resulted in mechanical proofs of
theorems in analysis and other nontrivial parts of
mathematics.

------------------------------

Date: Sun, 4 Mar 84 09:00:57 pst
From: bobgian%PSUVAX1.BITNET@Berkeley
Subject: Netwide AI Course Bites the Dust

The "Netwide AI and Mysticism" course I had hoped to offer to all
interested people has become the victim of my overenthusiasm and the
students' underenthusiasm.
The term here is half over, and student energies and motivations are
YET to rise to the occasion.  I have tried my best, but (aside from a
very select and wonderful few) Penn State students just do not have
what it takes to float such a course.  I am spending most of my time
just trying to make sure they learn SOMETHING in the course.  The
inspiration of a student-initiated and student-driven course is gone.

My apologies to ALL who wrote and offered useful comments and advice.
My special thanks to all who mailed or posted material which has been
useful in course handouts.  I WILL try this again!!  I may give up on
the average Penn State student, but I WON'T give up on good ideas.

I will be moving soon to another institution -- one which EXPLICITLY
encourages innovative approaches to learning, one which EXPLICITLY
appeals to highly self-motivated students.  We shall try again!!

In the meantime, the "Netwide AI course" is officially disbanded.  Those
students here who DO have the insight, desire, and maturity to carry it
on may do so via their own postings to net.ai.  (Nothing I could do or
WANT to do would ever stop them!)  To them all, I say "You are the hope
for the world."  To the others, I say "Please don't stand in our way."

        -- Bob "disappointed, but ever hopeful" Gian...

[P.s.]

Since my last posting (808@psuvax.UUCP, Sunday Mar 4) announcing the
"temporary cessation" of the "Netwide AI and Mysticism" course from Penn
State, I have received lots of mail asking about my new position.  The thought
struck, just AFTER firing that note netwards, that instead of saying

    "I will be moving soon to another institution ...."

I SHOULD have said

    "I will soon be LOOKING for another institution -- one which EXPLICITLY
    encourages innovative approaches to learning, one which EXPLICITLY
    appeals to highly self-motivated students.  We shall try again!!"

That "new institution" might be a school or industrial research lab.  I want
FIRST to leave behind at Penn State the beginnings of what someday could be
one of the finest AI (especially Cognitive Science and Machine Learning)
labs around.  Then I'll start looking for a place more in tune with my
(somewhat unorthodox, by large state school standards) teaching and research
style.

To all who wrote with helpful comments, THANKS.  And, if anybody knows of
such a "new institution", I'm WIDE OPEN to suggestions!!!

        -- Bob "ever hopeful" Gian...

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
UUCP:   bobgian@psuvax.UUCP            -or-    allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: Wed 7 Mar 84 18:05:04-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Joint Seminar on Concurrency

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                     JOINT SEMINAR ON CONCURRENCY

                      Carnegie-Mellon University
                           July 9-11, 1984

     The National Science Foundation  (NSF)  of the United States  and
the Science and Engineering Research Council (SERC) of Great Britain have
agreed to support a Joint Seminar on Concurrency.  The seminar intends
to  discuss the state  of the art in concurrent programming languages,
their  semantics, and the problems of proving properties of concurrent
programs.

     A small number of participants from Britain and the United States
have already  been  invited,  but  other  interested  researchers  are
encouraged to attend.  Because of the limited NSF and SERC funding, no
financial support is  available.  However,  if you  are interested  in
participating and can find your own support, please contact as soon as
possible:

    Stephen D. Brookes                  Brookes@CMU-CS-A
    Department of Computer Science      Home (412) 441-6662
    Carnegie-Mellon University          Work (412) 578-8820
    Schenley Park
    Pittsburgh, PA  15213

     The other organizers of the meeting are Glynn Winskel  (Cambridge
University) and Bill Roscoe (Oxford University), but inquiries  should
be directed to Brookes at Carnegie-Mellon.

------------------------------

Date: 07 Mar 84  1358 PST
From: Terry Winograd <TW@SU-AI.ARPA>
Subject: Programming by Example

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Talkware Seminar (CS 377)

Date: Monday March 12
Speaker: Daniel Halbert (Berkeley & Xerox OSD) and David C. Smith (Visicorp)
Topic: Programming by Example
Time: 2:15-4
Place: 200-205

Most computer-based applications systems cannot be programmed by their
users. We do not expect the average user of a software system to be able
to program it, because conventional programming is not an easy task.

But ordinary users can program their systems, using a technique called
"programming by example". At its simplest, programming by example is
just recording a sequence of commands to a system, so that the sequence
can be played back at a later time, to do the same or a similar task.
The sequence forms a program. The user writes the program -in the user
interface- of the system, which he already has to know in order to
operate the system. Programming by example is "Do what I did."
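The record-and-replay idea described above can be sketched in a few lines.  This is an illustrative toy only; the class and command names are invented here and are not part of the Star system or the talk:

```python
# Minimal sketch of "programming by example" as macro recording:
# commands issued through the user interface are logged while recording
# and can be played back later ("Do what I did").

class Recorder:
    def __init__(self):
        self.script = []        # the recorded program: (command, args) pairs
        self.recording = False

    def do(self, command, *args):
        """Execute a command through the interface; log it if recording."""
        if self.recording:
            self.script.append((command, args))
        return command(*args)

    def replay(self):
        """Re-execute the recorded sequence of commands, in order."""
        return [command(*args) for command, args in self.script]

# Usage: record two editing commands, then play the sequence back.
log = []
rec = Recorder()
rec.recording = True
rec.do(log.append, "open document")
rec.do(log.append, "make heading bold")
rec.recording = False
rec.replay()                    # repeats both recorded commands
```

Generalizing such a program (so it operates on data other than the example's) would mean replacing the recorded constant arguments with parameters, which is the harder part the talk addresses.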

A simple program written by example may not be very interesting. I will
show methods for letting the user -generalize- the program so it will
operate on data other than that used in the example, and for adding
control structure to the program.

In this talk, I will describe programming by example, discuss current
and past research in this area, and also describe a particular
implementation of programming by example in a prototype of the Xerox
8010 Star office information system.

------------------------------

Date: Thu, 8 Mar 84 14:57 PST
From: BrianSmith.PA@PARC-MAXC.ARPA
Subject: Mathematical Ontology Seminar Rescheduled

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

David McAllester's talk has been rescheduled in both time and space (in
part to avoid conflict with a visit to PARC by the King of Sweden!); I
hope this makes it easier for people to attend.  It will now take place
at 3:30 on Monday in room 3312, instead of at 11:00.

        Title: "MATHEMATICAL ONTOLOGY"

        Speaker: David McAllester (M.I.T.)
        When: Monday March 12th at 3:30 p.m.
        Where: Xerox PARC Executive Conference Room, Room 3312
                (non-Xerox people should come a few moments early,
                 so that they can be escorted to the conference room)

------------------------------

Date: 09 Mar 84  0134 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Incompleteness in Modal Logic

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
SPEAKER: Johan van Benthem, University of Groningen

TITLE:  "From Completeness Results to Incompleteness
         Results in Modal Logic"

TIME:   Wednesday, Mar. 14, 4:15-5:30 PM
PLACE:  Stanford Mathematics Dept. Room  383-N


  For a long time the main activity in intensional logic
consisted in proving completeness theorems, matching some
logic with some modal class.  In the early seventies,
however, various incompleteness phenomena were discovered -
i.e., such a match is not always possible.  By now, we know that
the latter phenomenon is the rule rather than the exception,
and the issue of the `semantic power' of the possible worlds
approach has become a rather complex and intriguing one.

  In this talk I will give a survey of the main trends in the
above area, concluding with some open questions and partial
answers.  In particular, a new type of incompleteness theorem
will be presented, showing that a certain tense logic defies
semantic modelling even when both modal class and truth
definition are allowed to vary.

------------------------------

Date: 8 Mar 84 12:55:47 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Thinking About Graph Theory

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


             III Seminar on AI and Mathematical Reasoning

          Title:    Thinking About Graph Theory
          Speaker:  Susan Epstein
          Date:     Tuesday, March 13, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge


  Dr. Susan Epstein, a recent graduate of our department, will give an informal
talk based on her thesis work.  Here is her abstract:

       A major challenge in artificial intelligence is to provide computers
    with  mathematical  knowledge  in  a format which supports mathematical
    reasoning.  A recursive formulation is described as the foundation of a
    knowledge representation  for  graph  theory.    Benefits  include  the
    automatic  construction  of  examples and related algorithms, hierarchy
    detection, creation of new properties, conjecture and theorem proving.

------------------------------

End of AIList Digest
********************

∂12-Mar-84  1023	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #30
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Mar 84  10:22:51 PST
Date: Sun 11 Mar 1984 23:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #30
To: AIList@SRI-AI


AIList Digest            Monday, 12 Mar 1984       Volume 2 : Issue 30

Today's Topics:
  AI Tools - Production Systems Tools Request,
  Documentation Tools - Manual Generators,
  Mathematics - Plane vs. Sphere,
  Waveform Analysis - ECG Testing,
  Humor - Connectionist Dog Modeling & Tail Recursion
----------------------------------------------------------------------

Date: 8 Mar 84 14:01:41-PST (Thu)
From: decvax!ittvax!wxlvax!adele @ Ucb-Vax
Subject: Production systems tools
Article-I.D.: wxlvax.248

I'm interested in finding out about tools for doing production
systems work. Does anyone know of any such tools that exist (for example,
but not limited to, syntax directed editors, rule maintenance aids, run time
environments, etc.)?

In the best of all possible worlds, what kinds of tools would you like to
see? I'd appreciate any suggestions, advice, gripes, whatever from people
who've used production systems.

Thanks much!

        Adele Howe

USMail: ITT-ATC                   Tele: (203) 929-7341 Ext.976
        1 Research Dr.            UUCP: decvax!ittvax!wxlvax!adele
        Shelton, CT. 06464

------------------------------

Date: 8 Mar 84 20:02:23-PST (Thu)
From: hplabs!zehntel!dual!fortune!rpw3 @ Ucb-Vax
Subject: Re: Documentation tools - (nf)
Article-I.D.: fortune.2722

Versions of DEC-10 RUNOFF later than about 1977 had a feature called
the "select" character set, which was a hook to the commenting
conventions of your favorite programming languages so that RUNOFF input
could be buried in comments in the code. RUNOFF knew enough to look at
the extension of the source file and set the "select" set from that to
the normal defaults. Typically, <comment-char><"+"> turned stuff on,
and <comment-char><"-"> turned it off.

By using the equivalent of "-ms" displays (.DS/.DE) (which I have
forgotten the RUNOFF version of), you could actually include selected
pieces of the code in the document.

It really helped if the language had a "comment through end of line"
character, though you can make do (as in "C") by using some other
character at the front of each line of a multi-line comment.
An example in "C", written as if nroff knew about this feature and
had been told that the "select" char was "*":

        /*+
         *.SH
         *Critical Algorithms
         *.LP
         *The Right_One macro requires two's-complement arithmetic,
         *as it uses the property of the rightmost "one" remaining
         *invariant under negation:
         *.DS B
         *      Right_One(X) = (-X) AND (NOT X)
         *.DE
         *where "-X" is negation (unary minus) and "AND" and "NOT"
         *are full-word bit-wise logical operators.
         *-
         */
        #define Right_One(x) ((-(x))&(~(x)))

This turned out to be very useful in keeping the documentation up-to-date
with the code. In addition, RUNOFF had a /VARIANT:xyz option that allowed
you to have ".IF xyz", etc., in your document, so that one file could
contain the "man" page (.HLP file), the documentation (.DOC), and the
program logic manual (.PLM). You specified the variant you wanted when
you ran it off. RUNOFF itself was the classic example: the source contained
all of the end-user documentation (a bit extreme, I admit!).
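The extraction step being described is easy to sketch.  The following is illustrative Python written for this note, not DEC's actual RUNOFF code; it assumes a "*" select character and the `<select>+` / `<select>-` conventions shown above:

```python
# Pull documentation out of source comments, RUNOFF-style: keep lines
# between the "<select>+" and "<select>-" markers, stripping the leading
# comment decoration (the select character itself).

def extract_doc(lines, select="*"):
    """Yield documentation lines embedded in comments."""
    emitting = False
    for line in lines:
        text = line.strip()
        if text.startswith(select + "+") or text.endswith(select + "+"):
            emitting = True          # e.g. "/*+" turns extraction on
        elif text.startswith(select + "-"):
            emitting = False         # "*-" turns it off again
        elif emitting and text.startswith(select):
            yield text[len(select):]

source = """\
/*+
 *.SH
 *Critical Algorithms
 *.LP
 *The macro requires two's-complement arithmetic.
 *-
 */
#define Right_One(x) ((-(x))&(~(x)))
""".splitlines()

print("\n".join(extract_doc(source)))
```

The /VARIANT-style conditionals would sit on top of this: after extraction, a second pass keeps or drops ".IF xyz" regions according to the variant requested.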

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphin Drive, Redwood City, CA 94065

------------------------------

Date: 11 Mar 1984  23:28 EST (Sun)
From: "Robert P. Krajewski" <RPK%MIT-OZ@MIT-MC.ARPA>
Subject: Manual generators: Lisp systems

One system that allows manual sources to be interspersed with code is LSB, a
system for maintaining large Lisp systems.  (It contains a system definition
facility and tools for grappling with getting code to run in various Lisp
dialects including Maclisp, NIL, and Lisp Machine Lisp.)  LSB will
``compile'' manuals for either TeX or Bolio (a Lisp document processor that
looks like the *roff family).

My wish list for a large system maintenance program would allow for the
generation of manuals, reference cards, and online documents of various
formats from the same source.  Are there any other packages for other
languages that will do this (or at least the subset that LSB offers) ?

``Bob''

------------------------------

Date: 9 Mar 84 6:25:57-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Plane = Sphere ?
Article-I.D.: hou2g.194

>                                                       In fact, the
>plane and the sphere are topologically equivalent (the plane is a sphere
>of infinite radius) ...

This statement has been made so frequently that I think it's time someone
took exception.  A plane and sphere are NOT topologically equivalent: a
sphere has an additional point.  That's why plane-like coordinate systems
mapped to a sphere always have a point where the coordinates are undefined.
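Jim's "additional point" remark is the standard one-point-compactification fact; stereographic projection makes it precise (a textbook construction, not part of the original post):

```latex
% Stereographic projection: a homeomorphism between the unit sphere
% minus the north pole N = (0,0,1) and the plane z = 0.
\[
  \pi(x,y,z) = \left( \frac{x}{1-z},\ \frac{y}{1-z} \right),
  \qquad (x,y,z) \in S^2 \setminus \{N\},
\]
% with continuous inverse
\[
  \pi^{-1}(u,v) =
  \left( \frac{2u}{u^2+v^2+1},\ \frac{2v}{u^2+v^2+1},\
         \frac{u^2+v^2-1}{u^2+v^2+1} \right).
\]
% So the plane is homeomorphic to the sphere with one point removed;
% adding that point back yields the sphere, which is why planar
% coordinates on a sphere always leave one point undefined.
```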

In any case, spherical and planar maps can both be colored with the same
number of colors.

                                                  Jim

------------------------------

Date: 9 Mar 84 7:47:10-PST (Fri)
From: harpo!ulysses!unc!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: Re: computer ECG, FDA testing of AI programs
Article-I.D.: ecsvax.2140

An extract of a previous submission by me mentioned the overreading of an
ECG interpretation by a cardiologist.  What I meant (and what was not clear)
is that the cardiologist is looking at the raw ECG, not the output of the
computer (although a lot of preprocessing is often done which is hidden
from the cardiologist--that is a separate problem--at least it looks
like what you would get from a standard ECG machine).  On a related issue,
medical decisions regarding the treatment of an individual patient *have* to
be made by the local physician treating the patient (at least that is long
standing medical practice and opinion).  The overreading offered by the
remote services is a look at the reconstructed input ECG by a Board
Certified Cardiologist and is intended to be analogous to a "consultation"
by a more experienced and/or specialized physician.  The name of the service
is Telemed, not Telenet as I incorrectly typed.  Committees of the American
Heart Association and the American College of Cardiology are attempting to
set standards for computer (and human) interpretation of ECG's.  A snag is
that different preprocessing of the ECG's by different manufacturers makes
it rather uneconomical to acquire a large number of "standard" ECG's in
machine readable form.  I think the FDA is looking at all this and I think
under current law they can step in at their whim.  So far they seem to be
waiting for the above groups to present standards (since they don't seem to
have the resources to even start to develop them within the FDA).

        Jack Buchanan
        Cardiology and Biomedical Engineering
        UNC-Chapel Hill
        Chapel Hill NC
        decvax!mcnc!ecsvax!jwb

------------------------------

Date: 9 Mar 84 7:43:40-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: Connectionist Dog Modeling
Article-I.D.: rocheste.5532

  From seismo!harpo!decvax!decwrl!rhea!orphan!benson Fri Mar  2 20:24:18 1984
  Date: Thursday,  1 Mar 1984 13:45:43-PST
  From: seismo!decvax!decwrl!rhea!orphan!benson
  Subject: Re: Seminar Announcement


                                                             29-Feb-1984


     Garrison W. Cottrell
     University of Cottage Street
     55 Cottage Street
     Rochester, New York 14608



     Dear Mr. Cottrell:

     Although  I  was  unable  to  attend  your  recent  seminar,   "New
     Directions  in  Connectionist  Dog  Modeling,"  I  am  compelled to
     comment on your work as presented in your  published  works,  along
     with the new ideas briefly discussed in the seminar announcement.

     Having read your "Dog:  A Canine  Architecture"  in  late  1981,  I
     approached  "Toward  Connectionist Dog Modeling" the following year
     with cautious optimism.  The former work encouraged me that perhaps
     a consistent dog model was, in fact, obtainable;  at the same time,
     it caused me to wonder why it was desirable.  Nonetheless,  "Toward
     Connectionist  Dog  Modeling"  proved  to  be  a  landmark  in this
     emerging science, and my resulting enthusiasm quieted those nagging
     suggestions of futility.

     You may not be familiar with my work in  the  field  of  artificial
     ignorance,  which,  I  would  like to suggest, shares several goals
     with your own work, with different emphasis.  "Artificial Ignorance
     -  An  Achievable  Goal?" (Benson 79) was the first of my published
     papers on the subject.  Briefly, it promoted the idea that although
     creation  of  an  "artificially  intelligent"  machine  is a worthy
     scientific goal, design  and  implementation  of  an  "artificially
     ignorant"   one  is  much  more  sensible.   It  presented  several
     arguments  supporting  the  notion  that,  compared  to  artificial
     intelligence,  artificial  ignorance  is  easily achievable, and is
     therefore the logical first step.

     As a demonstration of the power of  artificial  ignorance  (AI),  I
     spent  the latter half of 1979 producing CHESS1, a chess system for
     the VAX-11/780.  CHESS1 was written primarily in LISP,  a  language
     of   my   own   invention   (Language   for   Ignorance  Simulation
     Programming).  In a resounding victory, CHESS1  lost  to  even  the
     most  ignorant  human  players, being unable to distinguish between
     the pieces.  CHESS2, a more sophisticated implementation  completed
     in  April of 1980, lost just as effectively by moving the pieces in
     a clockwise progression around the edge of the board.

     Ignored by overly ambitious, grant-hungry  researchers,  artificial
     ignorance  seemed to become my own personal discipline.  After only
     three issues, the fledgling SIGIGN newsletter was discontinued, and
     the special interest group it served was disbanded.



     Undaunted, I published a series of three papers in 1980.  The first
     two  described several techniques I had developed toward simulating
     ignorant behavior ("Misunderstanding Human  Speech",  and  "Pattern
     Misidentification",  Benson  80).   The  third  presented  a simple
     conversion method for producing artificially ignorant programs from
     artificially  intelligent  ones,  using  a  heuristic bug insertion
     algorithm ("Artificial Brain Damage", Benson 80).

     Despite these technical triumphs,  interest  in  AI  seemed  to  be
     dwindling.   By  the  spring  of  1981,  I, too, had lost interest,
     convinced that  my  AI  research  had  been  little  more  than  an
     interesting intellectual exercise.

     It is for this reason that your dog modeling thesis  so  thoroughly
     captured  my  interest.   Surely  the  phrases  (to quote from your
     announcement) "impoverished phoneme," "decimated world  view,"  and
     "no  brain"  imply  "ignorance." And, if I may paraphrase from your
     original treatise, the generic dog is essentially the equivalent of
     an intellectually stunted human who has been forced to bear fur and
     eat off the floor.

     Clearly dog modeling and AI have  much  in  common.   To  prove  my
     point, I have simulated the Wagging Response in a LISP application,
     and am working toward  a  procedural  representation  of  the  Tail
     Chasing Activity.  The latter is a classic demonstration of genuine
     ignorance,  as  well  as  a  natural   application   of   recursive
     programming techniques.

     I welcome any suggestions you have on these experiments,  and  look
     forward to the continued success of your dog modeling research.



                                        Sincerely,

                                           Tom Benson

------------------------------

Date: 9 Mar 84 7:45:25-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: Tail Recursion is Iterative
Article-I.D.: rocheste.5535

  Date: Thursday,  8 Mar 1984 18:57:59-PST
  From: decvax!decwrl!rhea!orphan!benson
  Subject: Re: Tail recursion.  Please forward to Mr. Sloan.


Dear Mr. Cottrell:

  I do realize that in most cases (i.e., everyday programming), tail recursion
can be reduced to iteration. However, in my study of this aspect of dog
modeling, I found the underlying MOTIVATION to be recursive in nature. Clearly
this is not a concept which can be applied to programming outside the AI realm.
(And when I say "AI", I of course mean "AI", not "AI"). My canine subject did
not set out to chase his tail for i equals 1 to n. Nor did he intend to chase
it until some condition was met; the obvious condition being "has the tail
been caught?" In fact, frequent experiments showed that actual tail capture
did not necessarily end the cycle, and it often was not achieved at all before
cessation of the chasing activity. No, a more realistic model is one in which
a bored or confused dog initiates an attempt to catch his tail. During this
process, the previously unseen tail falls into view as the head is turned.
The dog's suspicion is aroused; is this some enemy preparing to strike? This
possibility causes an attempt to catch the tail. This causes the tail to fall
into view....   and so on. The recursion may be terminated either by some
interrupt generated by an unrelated process in the dog's brain, or by forced
intervention of the dog's master. The latter is dangerous, and should be
scrupulously avoided, because it does not allow the dog's natural unwinding
mechanism to be invoked. Thus, the dog may carry unnecessary Tail Chasing
Activity procedure frames around in his brain for years, like a time bomb
waiting to go off. This, indeed, is a subject deserving further study.
  In response to your other question: you are welcome to post my AI reports
wherever it seems appropriate.


                                        Tom Benson

------------------------------

Date: 9 Mar 84 20:48:42-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: sloan's reply to benson's reply to sloan's reply to benson's reply
Article-I.D.: rocheste.5551

  Date: Fri, 9 Mar 1984  14:28 EST
  From: SLOAN%MIT-OZ@MIT-MC.ARPA
  Subject: tail recursion: forwarded reply

Gary-
 Of course, Mr. Benson knows that ALL time bombs are, by definition,
waiting to go off.
 As to the essentially recursive nature of TCA, I simply note that
this view requires a stack of dogs; in my experience stacks of dogs
engage in an entirely different form of behavior, which, under the
proper parity conditions, is truly recursive.
-Ken


[If anyone is Really Tired of This, I will stop sending this rather
convoluted conversation between two friends of mine who don't know
each other but apparently should -gwc]

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1624	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #31
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:24:30 PST
Mail-From: LAWS created at 13-Mar-84 16:35:02
Date: Tue 13 Mar 1984 16:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #31
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:23:50-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest           Wednesday, 14 Mar 1984     Volume 2 : Issue 31

Today's Topics:
  Humor - Who is Tom Benson?,
  Linguistics - And as a Non-Logical Conjunction,
  Brain Theory - Parallelism,
  Seminars - Procedural Learning (Boston) &
    Theorem Proving in the Stanford Pascal Verifier,
  Conference - 4th Conf. on FST&TCS,
  Review - Western Culture in the Computer Age
----------------------------------------------------------------------

Date: Mon 12 Mar 84 18:48:05-PST
From: ROBINSON@SRI-AI.ARPA
Subject: Tom Benson

Who is this Tom Benson and what is he doing out of
captivity?

------------------------------

Date: 12 Mar 84 11:32:17-PST (Mon)
From: hplabs!hao!seismo!rochester!stuart @ Ucb-Vax
Subject: And as a non-logical conjunction - Request for pointers
Article-I.D.: rocheste.5580

From: Stuart Friedberg  <stuart>
I am looking for pointers into the linguistic and natural language
processing literature concerning the use of "and" in English as a
non-logical conjunction. That is, the use of "and" often implies
temporal sequence and/or causality. There is also a use introducing
a verb complement.
        "Sheila took the ball and ran with it."
        "The lights were off and I couldn't see."
        "I will try and find the missing book."

I understand that treatment of conjunction and ellipsis is difficult.
Pointers to books, articles, theses, diatribes, etc. that (have sections
that) deal with "and" in this extra-logical sense will be *much* more
useful than pointers to more general treatments of conjunction and
ellipsis.

Useful things to know that I don't:
        What are (all?) the senses in which "and" may be used?
        Do all these interpretations apply to clause conjunction
                only? (I.e., not to noun conjunction, adverb
                conjunction, etc.)
        What knowledge is needed/useful to determine the sense of
                "and" in a given English sentence? (Given a knowledge
                of all the senses of "and", how do we eliminate some
                of them in a particular context?)
        Is it possible to expand Ross's constraints in a reasonable
                way to handle this kind of conjunction? (Constraints
                on Variables in Syntax, thesis, MIT, 1967, etc.)

I have a few pointers already, but my only real linguistic source is
several years old. I assume that additional work has been done from
both linguistic and AI points of view. I am starting from:

1) Susan F. Schmerling, "Asymmetric Conjunction and Rules of Conversation",
in Syntax and Semantics, Vol. 3 (Speech Acts), Cole and Morgan (eds.),
Academic Press, New York, 1975

2) Stan C. Kwasny, "Treatment of Ungrammatical and Extra-grammatical
Phenomena in Natural Language Understanding Systems", Indiana University
Linguistic Club, Bloomington, IN, 1980

                                Stu Friedberg
                        {seismo, allegra}!rochester!stuart      UUCP
                                stuart@rochester                ARPA

                                Dept. Computer Science          MAIL
                                University of Rochester
                                Rochester, NY 14627

------------------------------

Date: Tue, 13 Mar 84 18:05 EST
From: Ives@MIT-MULTICS.ARPA
Subject: Brain Theory - Parallelism


A strikingly clear picture of brain parallelism at the gross anatomical
level was presented during a lecture at MIT on the architecture of the
cerebral cortex by a neuroanatomist (Dr.  Deepak Pandya, Bedford
Veterans Administration Hospital, Bedford, MA).

Almost a hundred years ago, dye studies showed that the cerebral cortex
is not a random mass of neurons, and it was mapped into a few dozen
areas, differentiated by microstructure.  Later, it was shown that
lesions in a certain area always produced the same behavioral
deficiencies.  Now, they have mapped out the interconnections between
the areas.  The map looks like a plate of spaghetti but, when
transformed into a schematic, reveals simplicity and regularity.

Each half of the brain includes six sets of areas.  Each set has a
somatic area, a visual area and an auditory area.  Each area in a set
connects to the other two, forming a triangle.  The six sets form a
stack because each area is connected to the area of the same kind in the
next set.  The eighteen areas schematized by this simple triangular
stack include most of the tissue in a cerebral cortex.

If I remember correctly, all mammals have this architecture.  It was
surmised that one set evolved first and was replicated six times,
because the neuronal microstructure varies gradually with increasing
level.  He also suggested that higher levels might process higher levels
of abstraction.

-- Jeffrey D.  Ives

------------------------------

Date: 12 Mar 1984  13:39 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Procedural Learning (Boston)

            [Forward from the MIT bboard by SASW@MIT-MC.]

         Wednesday, March 14     4:00pm   8th floor playroom


          Acquisition of Procedural Knowledge from Examples

                            P. M. Andreae

  I will describe NODDY - a system that acquires procedures from
examples.  NODDY is a variation of concept learning in which the
concepts to be learned are procedures in the form of simple robot
programs.  The procedures are acquired by generalising examples
obtained by leading a robot through a sequence of steps.  Three
distinct types of generalisation are involved: structure
generalisation (eg. loops and branches), event generalisation (eg. the
branching conditions), and function induction.
  I will also discuss two principles that arise out
of, and are illustrated by, NODDY.  I claim that these principles have
application, not only to procedure acquisition, but also to any system
that does partial matching and/or generalisation of any kind.

------------------------------

Date: Mon 12 Mar 84 19:16:47-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Theorem Proving in the Stanford Pascal Verifier

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The Automatic Inference Seminar will meet on Wednesday March 14th in MJH 352
(note change of room from 301) at 1:30 p.m.
(This is tax-filing season;  I'm getting slightly too many groanworthy remarks
about "automatic deduction", hence the name change).

Speaker:  Richard Treitel (oh no, not again)

Subject:  Theorem Proving in the Stanford Pascal Verifier

The Stanford Pascal Verifier was developed in the late 1970's for research in
program verification.   Its deductive component, designed mainly by Greg Nelson
and Derek Oppen, has some features not found in many other natural deduction
systems, including a powerful method for dealing with equalities, a general
framework for combining the results of decision procedures for fragments of the
problem domain, and a control structure based on an unusual "normal form" for
expressions.   I will attempt to explain these and relate them both to other
provers and to post-Oppen work on the same technology.

------------------------------

Date: 9 Mar 84 9:52:21-PST (Fri)
From: harpo!ulysses!burl!clyde!akgua!psuvax!narayana @ Ucb-Vax
Subject: Call for 4th Conf. FST&TCS Bangalore India Dec 13-15.
Article-I.D.: psuvax.815

Subject: Call for papers

              4th Conference on Foundations of Software
              Engineering and Theoretical Computer Science

              Bangalore, INDIA,  DECEMBER 13-15, 1984

Sponsor: Tata Institute of Fundamental Research, Bombay, India.

Conference advisory committee:

A.Chandra(IBM), B.Chandrasekharan(Ohio state), S.Crespi Reghizzi(Milan)
D.Gries(Cornell), A.Joshi(Penn), U.Montanari(Pisa), J.H.Morris(CMU),
A.Nakamura(Hiroshima), R.Narasimhan(TIFR), J.Nievergelt(ETH), M.Nivat(Paris)
R.Parikh(New York), S.R.Kosaraju(Johns Hopkins), B.Reusch(Dortmund),
R.Sethi(Bell labs), S.Sahni(Minnesota), P.S.Thiagarajan(Aarhus),
W.A.Wulf(Tartan labs).

Papers are invited in the following areas:

       Programming languages and systems
       Program correctness and proof methodologies
       Formal semantics and specifications
       Theory of computation
       Formal languages and automata
       Algorithms and complexity
       Data bases
       Distributed computing
       Computing practice

Papers will be REFEREED and a final selection will be made by the programme
committee.

Authors should send four copies of each paper to

       Chairman, FST&TCS Programme Committee
       Tata Institute of Fundamental Research
       Homi Bhabha Road, BOMBAY, 400 005, India

Due date for receiving full papers: MAY 31, 1984.

Authors will be notified of acceptance/rejection by: JULY 31,1984

Camera ready papers must be submitted by: SEP 15,1984

PROCEEDINGS WILL BE PUBLISHED.  For further details contact the above address.

Programme Committee: M.Joseph(TIFR), S.N.Maheswari(IIT), S.L.Mahindiratta(IIT),
                    K.V.Nori(Tata RDDC), S.V.Rangaswamy(IISC), R.K.Shyamasundar
                    (TIFR), R.Siromani(Madras Christian College).

------------------------------

Date: 12 Mar 84  2053 PST
From: Frank Yellin <FY@SU-AI.ARPA>
Subject: Western Culture in the Computer Age

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

By CHRISTOPHER LEHMANN-HAUPT
c.1984 N.Y. Times News Service
    TURING'S MAN. Western Culture in the Computer Age. By J. David
Bolter. 264 pages. University of North Carolina Press. Hard-cover,
$19.95; paper, $8.95.
    J. David Bolter, the author of ''Turing's Man: Western Culture in
the Computer Age,'' is both a classicist who teaches at the
University of North Carolina and a former visiting fellow in computer
science at Yale University. This unusual combination of talents may
not qualify him absolutely to offer a humane view of the computer
age, or what he refers to as the age of Turing's man, after Alan M.
Turing, the English mathematician and logician who offered early
theoretical descriptions of both the computer and advanced artificial
intelligence.
    But his two fields of knowledge certainly provide Bolter with an
unusual perspective on contemporary developments that many observers
fear are about to usher in an age of heartless quantification, if not
the final stages of Orwellian totalitarianism. In Bolter's view,
every important era of Western civilization has had what he calls its
''defining technology'' which ''develops links, metaphorical or
otherwise, with a culture's science, philosophy, or literature; it is
always available to serve as a metaphor, example, model, or symbol.''
    To the ancient Greeks, according to Bolter, the dominant
technological metaphor was the drop spindle, a device for twisting
yarn into thread. Such a metaphor implied technology as a controlled
application of power. To Western Europe after the Middle Ages, the
analogues to the spindle were first, the weight-driven clock, a
triumph of mechanical technology, and later, the steam engine, a
climax of the dynamic. In Bolter's subtly developed argument, the
computer - obviously enough the present age's defining metaphor - is
an outgrowth of both the clock and the steam engine. Yet,
paradoxically, the computer also represents a throwback.
    Everything follows from this. In a series of closely reasoned
chapters on the way in which the computer has redefined our notions
of space, time, memory, logic, language and creativity, Bolter
reviews a subtle but recurring pattern in which the computer
simultaneously climaxes Western technology and returns us to ancient
Greece. He concludes that if the ancient ideals were balance,
proportion and handicraft (the use of the spindle), and the Western
European one was the Faustian quest for power through knowledge
(understanding a clockwork universe to attain the dynamism of the
steam engine), then Turing's man combines the two.
    ''In his own way, computer man retains and even extends the Faustian
tendency to analyze,'' Bolter concludes. ''Yet the goal of Faustian
analysis was to understand, to 'get to the bottom' of a problem,''
whereas ''Turing's man analyzes not primarily to understand but to
act.''
    He continues: ''For Turing's man, knowledge is a process, a skill,''
like the ancient arts of spinning or throwing a pot. ''A man or a
computer knows something only if he or it can produce the right
answer when asked the right question.'' Faustian depth ''adds nothing
to the program's operational success.''
    Now in portraying Turing's man, Bolter may seem to be overburdening
a few simple metaphors. Yet his argument is developed with remarkable
concreteness. Indeed, if his book has any fault, it lies in the
extent to which he has detailed the slightly repetitious and
eventually predictable pattern of argument described above.
    Yet what is far more important about ''Turing's Man'' is its success
in bridging the gap between the sciences and the humanities. I can
only guess at how much it will inform the computer technologist about
philosophy and art, but I can vouch for how much it has to say to the
nonspecialist about how computers work. The inaccessibility of the
computer's inner functioning may well be a key to the author's case
that Turing's man is returning to the ancient Greeks' satisfaction in
the surface of things, but after reading Bolter's book, this reader
found the computer far less mysterious. Not incidentally, the book
makes us understand why computers aren't really all that good at
doing mathematics (they can't get a grip on the notion of infinity);
and it far surpasses Andrew Hodges's recent biography of Alan Turing
in explaining Turing's Game for testing artificial intelligence.
    But most provocative about this study is what it has to say about
the political implications of the computer age. Will Turing's man
prove the instrument of Orwell's Big Brother, as so many observers
are inclined to fear? Very likely not, says Bolter:
    ''Lacking the intensity of the mechanical-dynamic age, the computer
age may in fact not produce individuals capable of great good or
evil. Turing's man is not a possessed soul, as Faustian man so often
was. He does not hold himself and his world in such deadly earnest;
he does not speak of 'destiny' but rather of 'options.' And if the
computer age does not produce a Michelangelo and a Goethe, it is
perhaps less likely to produce a Hitler or even a Napoleon. The
totalitarian figures were men who could focus the Faustian commitment
of will for their ends. What if the will is lacking? The premise of
Orwell's '1984' was the marriage of totalitarian purpose with modern
technology. But the most modern technology, computer technology, may
well be incompatible with the totalitarian monster, at least in its
classic form.''
    Indeed, according to Bolter, Turing's man may be more inclined to
anarchy than to totalitarianism. This may be whistling past the
graveyard. But in Bolter's stimulating analysis, it also makes a kind
of homely sense.

------------------------------

End of AIList Digest
********************

∂16-Mar-84  1247	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #32
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Mar 84  12:46:14 PST
Date: Fri 16 Mar 1984 10:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #32
To: AIList@SRI-AI


AIList Digest            Friday, 16 Mar 1984       Volume 2 : Issue 32

Today's Topics:
  AI Books - Request for Sources & New Series Announcement,
  Fuzzy Set Theory - Request for References,
  Bindings - Request for Tong Address,
  Humor - Man-LISP Interface,
  AI Tools - Review of IQLISP for IBM PC,
  Linguistics - Nonlogical "And",
  Waveform Analysis - ECG Interpretation Liability,
  Alert - High Technology Articles,
  Seminars - Knowledge-Based Documentation Systems &
    Sorting Networks & Expert Systems for Fault Diagnosis
----------------------------------------------------------------------

Date: 14 Mar 84 0118 EST
From: Dave.Touretzky@CMU-CS-A
Subject: who sells AI books?

I am looking for places that sell AI books, other than publishers.  Do
you know of any book distributors that specialize in AI titles?  How about
book clubs featuring AI, cog. sci., robotics, and the like?  Send names
and addresses to Touretzky@CMUA.  I'll make the listing available online
if there's any demand for it.

[The best source is probably Synapse Books.  Does anyone have the address?

The Library of Computer and Information Science is a book club that often
offers AI books, and sometimes offers vision and popularized cognitive science
or robotics books.  Right now you can get a great deal on the Handbook of AI.
See Scientific American or the latest IEEE Computer for details, or do a
current member a favor by letting him sign you up.  -- KIL]

------------------------------

Date: Wed 14 Mar 84 16:41:17-PST
From: DKANERVA@SRI-AI.ARPA
Subject: New Book Series

         [Forwarded from the CSLI newsletter by Laws@SRI-AI.]

    MANUSCRIPTS SOLICITED FOR NEW MIT PRESS/BRADFORD BOOKS SERIES

     MIT Press/Bradford  Books has  announced  a new  series  entitled
"Computational Models of Cognition and Perception" edited by Jerome A.
Feldman, Patrick J. Hayes, and David E. Rumelhart.

     The  series  will  include  state-of-the-art  reference works and
monographs, as well as upper  level texts,  on computational models in
such  subject  domains as  knowledge representation,  natural language
understanding, problem  solving,  learning and  generalization,  motor
control, speech perception and production, and all areas of vision.

     The series will span  the full range  of computational models  in
cognition and  perceptual research  and teaching,  including  detailed
neural models, models based on symbol-manipulation languages, and mod-
els employing techniques of formal logic. Especially welcome are works
treating experimentally  testable  computational  models  of  specific
cognitive and  perceptual  functions; basic  computational  questions,
particularly relationships between  different classes  of models;  and
representational questions linking computation  and semantics to  par-
ticular problem domains.

     Manuscript proposals should be submitted to one of the three
editors, or to Henry Bradford Stanton, Publisher, Bradford Books,  The
MIT Press, 28 Carleton Street, Cambridge, MA 02142 (617-253-5627).
However, we  welcome  your discussing  ideas  for books  and  software
programs and  packages  with  any  of the  members  of  the  Editorial
Advisory Board who may be your close colleagues:

        John Anderson                   Drew McDermott
        Horace Barlow                   Robert Moore
        Jon Barwise                     Allen Newell
        Emilio Bizzi                    Raymond Perrault
        John Seely Brown                Roger Schank
        Daniel Dennett                  Candy Sidner
        Geoffrey Hinton                 Shimon Ullman
        Stephen Kosslyn                 David Waltz
        Jay  McClelland                 Robert Wilensky
                                        Yorick Wilks

------------------------------

Date: 14 Mar 84 14:08:57 PST (Wednesday)
From: Conde.PA@PARC-MAXC.ARPA
Subject: AIList : Request for Fuzzy Set references

I would like to know if anyone has references to good introductory
books on the theory of fuzzy sets, as well as fuzzy databases.

Please reply to me or to the digest.

Thanks,
Daniel Conde

------------------------------

Date: 12 Mar 84 19:25:20-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: Tong Colloquium on Knowledge-Directed Search
Article-I.D.: uiucdcs.6150

Could someone tell me where the speaker [Christopher Tong] is to be
contacted? I'd like to follow up on his work.


                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

[The talk, on knowledge-aided circuit design, was given at Rutgers.
Does anyone have Tong's net or mail address? -- KIL]

------------------------------

Date: Tue 13 Mar 84 13:19:16-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Wait till he discovers the parenthesis key!

        [Forwarded from the UTEXAS-20 bboard by Laws@SRI-AI.]

[FYAmusement: The following item was found on SAIL's Bboard, contributed by
Ron Newman. --CBD]


The following letter to the editor was published in Softalk of March,
1984:

  I have come into possession recently of a program called Microlisp.  I
understand that it has been around for some time, so maybe someone out
there knows something about it.  I cannot get it to do anything but
print numbers I type in or print the word "nil".  How do I make it do
anything else?  Can you give me an example of something useful that I
might be able to do with it?

					[...]

------------------------------

Date: 9 Mar 84 8:51:58-PST (Fri)
From: decvax!genrad!wjh12!vaxine!chb @ Ucb-Vax
Subject: Review of IQLISP for IBM PC
Article-I.D.: vaxine.211

        A review of IQLisp (by Integral Quality, 1983).

                Compiled by Jeff Shrager
                    CMU Psychology
                      7/27/83


[Charlie has forwarded Jeff Shrager's review of IQLISP for the IBM PC.
This appeared in AIList in early August, so I will not reprint it here.
Readers who want the text can FTP file <AILIST>IQLISP.TXT on SRI-AI or
contact AIList-Request@SRI-AI.  -- KIL]


                                   Charlie Berg
                                   Automatix, Inc.
                                   ...allegra!linus!vaxine!chb

------------------------------

Date: Tue 13 Mar 84 20:12:39-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Use of "and"

My father, who is a law professor, was able to come up with an instance where a
contract contained the word "and" in a certain place, and the court interpreted
that "and" to mean what we computermen would have meant by "or".   Or maybe it
was the other way around;  I forget the details.
                                                        - Richard

------------------------------

Date: Wed, 14 Mar 84 09:26:36 EST
From: John McLean <mclean@NRL-CSS>
Subject: nonlogical "and"

  I think that the first treatment of the fact that "and" in English is
not the purely logical "and" of predicate calculus appeared in Philosophy
literature.  You might want to take a look at Strawson's PHILOSOPHICAL LOGIC
for classical arguments that the "and" of natural language is distinct from the
logician's "and" and Grice's "William James' Lectures" for a very influential
rebuttal in which he argues that the use of "and" in English can be modeled
by logical conjunction if we take into account "conversational implicature",
a concept Grice develops in the lectures.

  By the way, one of my favorite examples of nonlogical conjunction which you
do not mention is the statement made to someone about to eat a mushroom
growing in the ground "You will not eat that and live."  This statement
is almost always correct from the truth-functional view of conjunction
even if the mushroom is harmless, since few people when issued the warning
will indulge their appetite.
                            Good luck,
                            John

------------------------------

Date: 13 Mar 84 9:27:44-PST (Tue)
From: harpo!ulysses!unc!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: More computer ECG, Cardiologist
Article-I.D.: ecsvax.2153

With respect to the responsibility of a cardiologist to overread every
ECG, this is (or should be) uniformly done.  The problem addressed by
the dial up services is that in the middle of the night, in a small
town hospital, the cardiologist's reading may not come until the next
day.  Many communities do not have a cardiologist at all.  The
physician obtaining the ECG (who in an emergency room is typically NOT
a cardiologist) has an obligation to carefully examine the ECG
obtained.  There are two schools of thought with regard to sending
computer ECG information to a physician who may not be expert in
interpreting an ECG.  One is that any information is better than none,
and therefore the nonexpert physician should get the information.  The
other is that if there is a screw up, and the local physician cannot
be trusted to recognize this, the computer analysis can do significant
harm and should be withheld.  (The local physician will ALWAYS have his
own interpretation.)  Both approaches have their merit.  Our local
approach is to NOT send machine interpretations back to the Emergency
Room until a person with some expertise in reading ECG's has looked at
the tracing and at the computer generated interpretation.  In some
cases, this approach negates the major advantage of having the
computer in the first place.

[...]
Jack Buchanan
Cardiology and Biomedical Engineering
UNC-Chapel Hill
decvax!mcnc!ecsvax!jwb  {Usenet}

------------------------------

Date: Thu 15 Mar 84 22:46:08-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: High Technology, Feb 84 has AI-relevant articles

               Summary of HIGH TECHNOLOGY, Feb 84
          ============================================

FEATURES
        BIOCHIPS: CAN MOLECULES COMPUTE?  The groundwork is being laid for
        21st-century computers based on carbon rather than silicon.  Molecular
        Switches. Soliton Switches and Logic. Bulk Molecular Devices. Analog
        Biochips. "Intelligent" Switches. Robot Vision. Fabrication. Protein
        Engineering. Development Strategy. written by Jonathan Tucker

        UNCOVERING HIDDEN FLAWS. Nondestructive tests spot trouble before it
        happens.  Computerized tomography.  6 techniques dominate.

        ENGLISH: THE NEWEST COMPUTER LANGUAGE. Natural language systems.
        Computational Linguistics. commercial applications. semantic grammars.
        Syntax, Semantics, vs. Pragmatics. Situation Semantics.

        BIOPOLYMERS CHALLENGE PETROCHEMICALS.  Oil-recovery agents, drug
        purification media, and plastics are promising applications.


OPINION
        Where defense can be cut
LETTERS
        Data Security; Helping kids learn; Retraining
UPDATE
        Graphics Analysis. converting a 3-D model into a finite element model
        Russians develop electromagnetic casting; licensed by Alcoa
        DNA sequence DB. (longer than 50 nucleotides) GenBank has 2700+ entries
                comprising over 2.1 million bases
        Optical memory units boost computer storage. ST with 4 gigabytes on a
                single 14 inch removable platter. 3 Mbyte/sec transfer rate
                costing $130,000.  Shugart offers 1Gbyte on 12" platter with
                5Mbyte/sec transfer costing $6,000 in quantities of 250
        In-mold metal plating of plastics cuts costs.
        Brain chemicals delivered on demand (an experimental method)
INSIGHTS
        Factory Automation Survival Kit
MILITARY/AEROSPACE
        Mosaic arrays boost infrared surveillance
CONSUMER
        Multi-decoders may revive AM stereo
BUSINESS
        Optical memories eye computer market
MICROCOMPUTER
        Micro publicity game
BOOK REVIEW
        Luciano Caglioti: The 2 Faces of Chemistry. [ on chemical risks ]
INVESTMENTS
        Big Potential for Custom Chip Suppliers

------------------------------

Date: Tue 13 Mar 84 10:23:55-EST
From: Renata J. Sorkin <RENATA@MIT-XX.ARPA>
Subject: Knowledge-Based Documentation Systems

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

               "KNOWLEDGE-BASED COMMUNICATION PROCESSES
                       IN SOFTWARE ENGINEERING"

                          Matthias Schneider
                            Project Inform
                       University of Stuttgart


        Designing programs to solve ill-structured problems as well as
trying to understand the purpose of a program and the designer's
intentions involves a great deal of communication between programmers
and users.  Program documentation systems must support these
communication processes by supplying a common base of knowledge and
structuring the exchange of information.

DATE:  Wednesday, March 14
TIME:  12:00 noon
PLACE: NE43-453
Host: Dr. A. diSessa

------------------------------

Date: 15 March 1984 16:58-EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Sorting Networks

            [Forward from the MIT bboard by Laws@SRI-AI.]

DATE:   Tuesday, March 20, 1984
TIME:   3:45pm   Refreshments
        4:00pm   Lecture
PLACE:  NE43-512a
TITLE:  "Sorting Networks"
SPEAKER:        Professor Michael Paterson, University of Warwick

Last year, Ajtai, Komlos and Szemeredi published details of a depth O(log n)
comparator network for sorting, thus answering a longstanding open problem.
Their construction is difficult to analyse and the bounds they proved result in
networks of astronomical size.  A considerable simplification is presented
which readily yields constructions of more moderate size.
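A comparator network of the kind described can be checked mechanically via the 0-1 principle: a network sorts all inputs if and only if it sorts every input of zeros and ones.  A minimal Python sketch (the 5-comparator network shown is the standard Batcher odd-even network for 4 inputs, chosen for illustration; it is not from the talk):

```python
from itertools import product

# A 5-comparator network that sorts 4 inputs (Batcher odd-even merge sort).
NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(values, network):
    """Run each comparator in order, swapping a pair when out of order."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# 0-1 principle: if every 0/1 input comes out sorted, every input does.
assert all(apply_network(bits, NETWORK) == sorted(bits)
           for bits in product((0, 1), repeat=4))
```

The depth of such a network is the number of parallel layers of disjoint comparators; the Ajtai-Komlos-Szemeredi result is that O(log n) layers suffice.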

HOST:   Professor Tom Leighton

------------------------------

Date: 15 Mar 84 13:53:27 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems for Fault Diagnosis...

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    An Expert System for Fault Monitoring and Diagnosis

          Speaker:  Kathy Abbott

          Date:     Tuesday, March 27, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge

  Kathy  Abbott,  a Ph.D. student in our department, will give an informal talk
describing her research work at NASA.  Here is her abstract:

       The Flight Management Branch  at  NASA/Langley  Research  Center  in
    Hampton, Va. is exploring the use of AI concepts to aid flight crews in
    managing aircraft systems. Under this research effort, an expert system
    is  being developed to perform on-board fault monitoring and diagnosis.
    Current expert systems technology is insufficient for this application,
    because the flight domain consists of dynamic physical systems and  the
    system  must respond in real time. A frame-based expert system has been
    designed that includes a  frame  associated  with  each  subsystem  and
    sensor  on  the  aircraft.  Among other information, the frames include
    mechanism models of the associated systems that  can  be  used  by  the
    diagnostic expert for hypothesis verification and predictive purposes.

------------------------------

End of AIList Digest
********************

∂18-Mar-84  2328	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #33
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Mar 84  23:27:15 PST
Date: Sun 18 Mar 1984 21:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #33
To: AIList@SRI-AI


AIList Digest            Monday, 19 Mar 1984       Volume 2 : Issue 33

Today's Topics:
  AI Books - Synapse Books,
  Bindings - Tong Address,
  Mathematics - Topology of Plane and Sphere,
  Expert Systems - Explanatory Capability,
  Automata - Characterizing Automata from I/O Pairs,
  Conferences - ACM Conference & CSCSI 84 Preliminary Program
----------------------------------------------------------------------

Date: Sun 18 Mar 84 21:53:54-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Synapse Books

I found a copy of the 1982 Synapse Books catalog.  The address is

  Synapse Information Resources, Inc.
  912 Cherry Lane
  Vestal, New York  13850

The catalog covers AI, automation, biomedical engineering, CAD/CAM,
robotics, instrumentation, cybernetics, and computer technology.
Prices seem to be the publishers' suggested prices, although I only
checked a couple.  The selection is impressive.

                                        -- Ken Laws

------------------------------

Date: Fri 16 Mar 84 21:31:54-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Tong Address

Chris Tong can be reached at TONG@SUMEX or TONG@PARC.  Mailing address:
Chris Tong, Xerox Palo Alto Research Center, 3333 Coyote Hill Rd., Palo
Alto, CA

--Tom

[Jeff Rosenschein@SUMEX reports that Chris hasn't used his Sumex
login for quite a while.  Richard Treitel@SUMEX suggested a
TONG@PARC-MAXC address.  -- KIL]

------------------------------

Date: 18 Mar 84 20:45:24 PST (Sun)
From: Tod Levitt <levitt@aids-unix>
Subject: more four color junk

   From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
   "A plane and sphere are NOT topologically equivalent; a
   sphere has an additional point."

More to the "point", the topological invariants of the plane and the
(two-) sphere are different, which is the definition of being
topologically inequivalent. For instance, the plane is contractible to a
point while the sphere is not; the plane is non-compact, while the
sphere is compact; the homotopy and homology groups of the plane are
trivial, while those of the sphere are not.

A more general form of the four-color theorem asks the question: for a
given (n-dimensional) shape (and its topological equivalents) what is
the fewest number of colors needed to color any map drawn on the
shape.
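For closed surfaces that more general question was in fact settled by the Heawood formula (proved for all surfaces by Ringel and Youngs, with the Klein bottle as the lone exception: it needs 6 colors, not the 7 the formula gives).  A small Python sketch of the bound, stated in terms of the Euler characteristic:

```python
from math import isqrt

def heawood_number(chi):
    """Color bound for a closed surface with Euler characteristic
    chi <= 2: floor((7 + sqrt(49 - 24*chi)) / 2)."""
    return (7 + isqrt(49 - 24 * chi)) // 2

# Sphere (chi = 2): 4 colors.  Torus (chi = 0): 7 colors.
```

For an orientable surface of genus g, chi = 2 - 2g, which recovers the familiar form floor((7 + sqrt(1 + 48g)) / 2).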

------------------------------

Date: 9 Mar 84 8:58:08-PST (Fri)
From: decvax!linus!utzoo!watmath!deepthot!julian @ Ucb-Vax
Subject: Re: computer ECG, FDA testing of AI programs
Article-I.D.: deepthot.212

As a matter of human engineering, I think "expert" programs for
practical use must be prepared to explain the reasoning followed
when they present recommendations.   Computer people ought to be
well aware of the need to provide adequate auditing and verification
of program function, even if the naive users don't know this.
The last thing we need is 'expert' computers that cannot be
questioned.  I think Weizenbaum had a valid point when he wrote
about programs that no one understood.  And I would be unhappy
to see further spread of computer systems that the human users cannot
feel themselves to be in charge of, especially when the programs
are called 'intelligent' and the technology for answering these
questions about the reasoning processes is fairly well established.
                Julian Davies

------------------------------

Date: 16 Mar 84 13:28:54 PST (Friday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Characterizing automata from I/O pairs

The following recent msgs should be of interest to this list, and
hopefully will stimulate some good discussion.  --Bruce

                              ----------

From: Ron Newman <Newman.es>

The following letter to the editor was published in Softalk of March,
1984:

  I have come into possession recently of a program called Microlisp.  I
understand that it has been around for some time, so maybe someone out
there knows something about it.  I cannot get it to do anything but
print numbers I type in or print the word "nil".  How do I make it do
anything else?  Can you give me an example of something useful that I
might be able to do with it?

                                        [...]

                              ----------

From: Bruce Hamilton <Hamilton.ES>

Actually, the letter implies a serious question, related to trying to
communicate with other forms of intelligent life: is there an approach,
to giving inputs and observing output to an unknown program, which is in
some sense optimal; i.e. leads to a complete characterization of input -
output pairs in the shortest possible time?

--Bruce

                              ----------

From: VanDuyn.Henr

One question is whether intelligent life would acquire (a.k.a. pirate or
steal) a piece of software without the documentation.

On the serious side, what you suggest reminds me of programs that
attempt to write programs by examining a small set of input-output
pairs.  At first, sample pairs are fed to the program; then the program
begins generating its own sample pairs to build and validate a
hypothesis.  I read an article about this in the ACM TOPLAS journal
about 3 years ago...

Mitch

                              ----------

From: stolfi.pa

"Is there an approach, to giving inputs and observing output to an
unknown program, which is in some sense optimal; i.e. leads to a
complete characterization of input - output pairs in the shortest
possible time?"

I am interested in that question, too. Do you know of any work in that
area? I have given some thought to it, but made only trivial progress.

To be more definite, consider a deterministic finite machine with N
internal states, and {0,1} as its input and output alphabets. The goal
is to determine the structure of the machine (i.e., its transition and
output functions) by feeding it a sequence of zeros and ones, and
observing the bits that come out of it. Nothing is known about the
structure of the machine. In particular, it is not known how to reset
the machine to its initial state, and not even whether it is possible to
do so (i.e., whether the machine is strongly connected). Then

(1) at best, you will be able to know the structure of a single strongly
connected component of the machine, and have only a vague knowledge of
the path that led from the initial state to that component. Moreover,
your answer will be determined only up to automaton equivalence. (In
other words, studying the behavior of something will only tell you how
that thing behaves, not how it is built.)

(2) if you have an upper bound on the number N of internal states, I
believe you can always deduce the structure of the machine, subject to
the caveats in (1), after feeding it some finite number f(N) of bits.
However, I have no algorithm for generating the required input and
analyzing the output, and I have no idea how big f(N) is.  O(N) is a
trivial lower bound.  Any upper bounds?  Can it be more than O(2↑N)?

(3) In any case, note that a finite machine built from n boolean gates
may have an exponential number of states.  (For example, a counter with
n flip-flops has 2↑n states.)  Therefore, even if you know that a
program has a single 16-bit integer variable and a 16-bit program
counter, you may need to feed it a few billion bits to know what it does.

(4) if you do not have an upper bound on N, there is no way you can
deduce it by experiment, or answer categorically any interesting
questions about the structure of the machine. For example, suppose you
have put in 999,999,999 bits, and you got that many zeros out. You still
don't know whether the machine is a trivial one-state, constant-output
gadget, or whether it is a billion-state monster that ignores its inputs
and simply puts out a '1' every billionth transition. Note however, that
you may still give answers of the form "either this machine has more
than X states, or it is equivalent to the following automaton: ..."

In anthropomorphic terms, (4) says that it is impossible to distinguish
a genuinely dumb being from a very intelligent one that is just playing
dumb. Point (3) makes me wonder if the goal and method of psychology --
to understand the human mind by studying how it behaves -- is a sensible
proposition after all.

jorge

[There have, of course, been investigations of pattern recognition
techniques for inferring Markov or finite-state grammars.  The
PURR-PUSS system is one that comes to mind.  Applications not
mentioned above include cryptography, data compression, fault
diagnosis, and prediction (e.g., stock market directions).  Martin
Gardner had a fun SciAm column ~13 years ago about building an
automaton for predicting the operator's heads/tails choices.  Gardner
also popularized the game of Eleusis, in which players try to
elucidate laws of nature by pulsing the system with test cases.
The Mastermind game is related, although you are given information
about the internal state as part of the output.  Several AI
researchers have used automata theory for investigating hypothesis
formation and learning.  -- KIL]
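Point (2) of the discussion above can at least be brute-forced for tiny N: enumerate every N-state machine and keep only those consistent with the observed output.  A minimal sketch, assuming a Mealy-style machine over {0,1} with start state 0 (all names here are illustrative, not from any of the messages above):

```python
from itertools import product

def run(machine, inputs, state=0):
    """Feed `inputs` to a machine mapping (state, bit) -> (next, out)."""
    out = []
    for bit in inputs:
        state, o = machine[(state, bit)]
        out.append(o)
    return out

def consistent_machines(n, inputs, outputs):
    """All n-state machines whose response to `inputs` is `outputs`.
    Brute force over (2n)^(2n) transition tables; tiny n only."""
    keys = [(s, b) for s in range(n) for b in (0, 1)]
    choices = [(t, o) for t in range(n) for o in (0, 1)]
    return [m for vals in product(choices, repeat=len(keys))
            for m in [dict(zip(keys, vals))]
            if run(m, inputs) == outputs]
```

The survivors are determined only up to behavioral equivalence, matching point (1); and no finite experiment can shrink the set of larger machines to zero, matching point (4).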

------------------------------

Date: Tue 13 Mar 84 12:46:54-EST
From: Neena Lyall <LYALL@MIT-XX.ARPA>
Subject: ACM Conference

             [Forwarded from the MIT bboard by Laws@SRI-AI.]

        "INTEGRATING THE INFORMATION WORKPLACE THE KEY TO PRODUCTIVITY"

                       ACM NORTHEAST REGIONAL CONFERENCE
                              19 - 21 March, 1984
                             University of Lowell
                                  Lowell, MA
                       Special Student/Faculty Rate $20

KEYNOTE SPEAKERS ARE:

Monday          George McQuilken, Chairman, Spartacus Computer Inc., "Mainframe
                Technology in Integrated Systems"

Tuesday         Carl  Wolf,  President,  Interactive  Data Corp., "Bridging the
                Mainframe to Micro Gap"

Wednesday       Mitch  Kapor,  President,  Lotus  Development   Corp.,   "Micro
                Technology in Integrated Systems"

CLOSING PLENARY SESSION:
                Thomas F. Gannon (5th Generation, DEC)
                Maurice V. Wilkes (Corporate Research, DEC)
                Frederick  G.  Withington  (V.P., ADL), "Integrating the Pieces
                - Computing in the 90's"

THE TRACK CHAIRMEN ARE:

Applications Technology Track
                Dr.  David  Prerau,   Principal   of   Technical   Staff,   GTE
                Laboratories, Inc.

Artificial Intelligence Track
                Jeffrey  Hill,  Manager of Development, Artificial Intelligence
                Corp.

                Dr.  David  Prerau,   Principal   of   Technical   Staff,   GTE
                Laboratories, Inc.

CAD/CAM & Robotics Track
                Cary Bullock, V.P., Engineering & Operations, Xenergy Corp.

Computer Tools & Techniques Track
                David Hill, Director, Data Systems & Communications

Database Management Track
                Michael Stadelmann, Manager of Development, GE/MIMS Systems

Decision Support Systems Track
                David   Kahn,   Manager,   Decision   Support   Systems,   Wang
                Laboratories

Networking & Data Communications Track
                Dr.  Barry  Horowitz,  V.P.   Bedford   Operations,   (Formerly
                Technical Director, Strategic Systems), MITRE Corp.

Office Automation Track
                Nancy Heaton, Manager of Office Automation, Wang Laboratories

Personal Computing Track
                Michael Rohrbach, International Market Resources

THERE ARE TWO TUTORIALS WHICH RUN IN PARALLEL WITH THESE SESSIONS:

Artificial Intelligence Tutorial (3 days)
                Dr. Eugene Charniak, Brown University.

                AI  and its newest developments, emphasizing expert systems and
                knowledge-based systems.

Networking Technology Tutorial (3 days)
                Stewart Wecker, Pres. Technology Concepts.

                Local area and other networks, including theory and
                manufacturers'  current  products  (IBM's  SNA,  DECNET and LAN
                products)

For  detailed  information  see bulletin board outside Room 515, 545 Technology
Square, Cambridge or call either 617/444-5222: Box  C,  or  617/271-3268:  Shim
Berkovits.

------------------------------

Date: 8 Mar 84 10:34:06-PST (Thu)
From: harpo!utah-cs!sask!utcsrgv!utai!tsotsos @ Ucb-Vax
Subject: CSCSI 84 Preliminary Program
Article-I.D.: utai.129

The preliminary program for the Fifth National Conference of the Canadian
Society for Computational Studies of Intelligence follows.
Registration or other information may be obtained from:

Prof. Michael Bauer,
Local Arrangements Chair, CSCSI/SCEIO-84
Dept. of Computer Science,
University of Western Ontario
London, Ontario, Canada
N6A 5B7
(519)-679-6048

Due to unfortunate circumstances beyond our control, there has been a
date change for the conference which has not been reflected in
several current announcements. The correct date is May 15-17, 1984.



                            CSCSI-84

                      Canadian Society for
              Computational Studies of Intelligence

                    Fifth National Conference

                           May 15 - 17
                  University of Western Ontario
                      London, Ontario, Canada


                       PRELIMINARY PROGRAM


Tuesday Morning, May 15

8:30 - 8:40     Introduction and Welcome

Session 1  -  Natural Language

8:40 - 9:40     Martin Kay (XEROX PARC) - Invited Lecture
9:40 - 10:10    "A Theory of Discourse Coherence for Argument Understanding"
                Robin Cohen (U of Toronto) (Long paper)
10:10 - 10:30   "Scalar Implicature and Indirect Responses in
                     Question-Answering"
                Julia Hirschberg (U of Pennsylvania) (Short paper)

10:30 - 10:40   BREAK

10:40 - 11:00   "Generating Non-Direct Answers by Computing Presuppositions
                   of Answers, Not of Questions or Mind your P's, not your Q's"
                Robert Mercer, Richard Rosenberg (U of British Columbia)
                (Short paper)
11:00 - 11:20   "Good Answers to Bad Questions: Goal Deduction in Expert
                     Advice-Giving"
                Martha Pollack (U of Pennsylvania) (Short paper)


Session 2  -  Cognitive Modelling and Problem Solving


11:20 - 11:40   "Using Spreading Activation to Identify Relevant Help"
                Adele Howe (ITT), Timothy Finin (U of Pennsylvania)
                (Short paper)
11:40 - 12:00   "Managing Time Maps"
                Thomas Dean (Yale) (Short paper)


12:00 - 1:30    LUNCH


Tuesday Afternoon, May 15

Panel Discussion

1:30 - 2:45    "The Artificial Intelligence, Robotics and Society Program"
                    of the Canadian Institute for Advanced Research
    Panel members : Zenon Pylyshyn - moderator (U of Western Ontario)
            Raymond Reiter - coordinator for the University of British Columbia
            John Mylopoulos - coordinator for the University of Toronto
            Steven Zucker - coordinator for McGill University
            Nick Cercone - president CSCSI/SCEIO


Session 3  -  Computer Vision I


2:45 - 3:45    "Optical Phenomena in Computer Vision"
               Steven Shafer (CMU) - Invited Lecture

3:45 - 4:00    BREAK

4:00 - 4:30    "Procedural Adequacy in an Image Understanding System"
               Jay Glicksman (Texas Instruments) (Long paper)
4:30 - 5:00    "The Local Structure of Image Discontinuities in One Dimension"
               Yvan Leclerc (McGill) (Long paper)
5:00 - 5:30    "Receptive Fields and the Reconstruction of Visual Information"
               Steven Zucker (McGill) (Long paper)



Wednesday Morning, May 16


Session 4  -  Robotics


8:30  -  9:30   "Robotic Manipulation"
                Matthew Mason (CMU)  -  Invited Lecture
9:30  - 10:00   "Trajectory Planning Problems, I: Determining Velocity
                     Along a Fixed Path"
                Kamal Kant (McGill) (Long paper)
10:00 - 10:20   "Interpreting Range Data for a Mobile Robot"
                Stan Letovsky (Yale) (Short paper)

10:20 - 10:45   BREAK


Panel Discussion

10:45 - 12:00   "What is a valid methodology for judging the quality
                    of AI research?"

                Panel Moderator : Alan Mackworth (U of British Columbia)

12:00 - 1:30    LUNCH


Wednesday Afternoon, May 16

Session 5  -  Learning

1:30 - 2:00     "The Use of Causal Explanations in Learning"
                David Atkinson, Steven Salzberg (Yale) (Long paper)
2:00 - 2:30     "Experiments in the Automatic Discovery of Declarative
                     and Procedural Data Structure Concepts"
                Mostafa Aref, Gordon McCalla (U of Saskatchewan) (Long paper)
2:30 - 3:00     "Theory Formation and Conjectural Knowledge in Knowledge Bases"
                James Delgrande (U of Toronto) (Long paper)
3:00 - 3:20     "Conceptual Clustering as Discrimination Learning"
                Pat Langley, Stephanie Sage (CMU) (Short paper)

3:20 - 3:40     BREAK

3:40 - 4:00     "Some Issues in Training Learning Systems and an
                     Autonomous Design"
                David Coles, Larry Rendell (U of Guelph) (Short paper)
4:00 - 4:20     "Inductive Learning of Phonetic Rules for Automatic
                     Speech Recognition"
                Renato de Mori (Concordia University)
                Michel Gilloux (Centre National d'Etudes des
                        Telecommunications, France)
                (Short paper)

4:20 - 4:30     BREAK


Session 6  -  Computer Vision II


4:30 - 5:00   "Applying Temporal Constraints to the Problem of Stereopsis
                   of Time-Varying Imagery"
              Michael Jenkin (U of Toronto) (Long paper)
5:00 - 5:30   "Scale-Based Descriptions of Planar Curves"
              Alan Mackworth, Farzin Mokhtarian
              (U of British Columbia) (Long paper)


Wednesday Evening, May 16  -  BANQUET



Thursday Morning, May 17


Session 7  -  Logic Programming


8:30  -  9:30   J. Alan Robinson (Syracuse U)  -  Invited Lecture
9:30  -  9:50   "Implementing PROGRAPH in Prolog: An Overview of the
                     Interpreter and Graphical Interface"
                P. Cox, T. Pietrzykowski (Acadia U) (Short paper)
9:50  - 10:10   "Making 'Clausal' Theorem Provers 'Non-Clausal'"
                David Poole (U of Waterloo) (Short paper)
10:10 - 10:30   "Logic as Interaction Language"
                Martin van Emden (U of Waterloo) (Short paper)

10:30 - 10:45   BREAK


10:45 - 12:00
Report of the CSCSI/SCEIO Survey on AI Research in Canada

           Nick Cercone - President CSCSI/SCEIO
           Gordon McCalla - Vice-President CSCSI/SCEIO


12:00 - 1:00   LUNCH

Thursday Afternoon, May 17


Session  8  -  Expert Systems and Applications


1:00 - 2:00   Ramesh Patil (MIT)  -  Invited Lecture
2:00 - 2:20   "ROG-O-MATIC: A Belligerent Expert System"
              Michael Mauldin, Guy Jacobson, Andrew Appel, Leonard Hamey (CMU)
              (Short paper)
2:20 - 2:40   "An Explanation System for Frame-Based Knowledge Organized
                   Along Multiple Dimensions"
              Ron Gershon, Yawar Ali, Michael Jenkin (U of Toronto)
              (Short paper)
2:40 - 3:00   "Qualitative Sensitivity Analysis: A New Approach to Expert
                   System Plan Justification"
              Stephen Cross (Air Force Institute of Technology) (Short paper)

3:00 - 3:20   BREAK


Session  9  -  Knowledge Representation


3:20 - 4:20    "A Fundamental Trade-off in Knowledge Representation
                    and Reasoning"
               Hector Levesque (Fairchild R&D)  Invited Lecture
4:20 - 4:50    "Representing Control Strategies Using Reflection"
               Bryan Kramer (U of Toronto) (Long paper)
4:50 - 5:10    "Knowledge Base Design for an Operating System
                     Expert Consultant"
               Stephen Hegner (U of Vermont),
               Robert Douglass (Los Alamos National Laboratory) (Short paper)
5:10 - 5:30    "Steps Towards a Theory of Exceptions"
               James Delgrande (U of Toronto) (Short paper)


5:30 - 5:45    CLOSING REMARKS

------------------------------

End of AIList Digest
********************

∂22-Mar-84  1127	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #34
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Mar 84  11:25:54 PST
Date: Thu 22 Mar 1984 10:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #34
To: AIList@SRI-AI


AIList Digest           Thursday, 22 Mar 1984      Volume 2 : Issue 34

Today's Topics:
  Corporate AI - Entry Route Request,
  AI Documents - HAKMEM Request,
  Inference - Identifying Programs,
  Fuzzy Sets - Reference,
  Computer Art - Computer Manipulated Novel,
  Expert Systems - Computer EKG's,
  AI Funding - Strategic Computing in the New York Review of Books,
  Public Service - Tax Info,
  Seminars - RUBRIC: Intelligent Information Retrieval &
    Computational Linguistics &
    Expert System for Building Expert System Rules
  Course Announcement - Lisp: Language and Literature
----------------------------------------------------------------------

Date: 19 Mar 84 19:06:44-PST (Mon)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: IBM vs. HP: research (AI) question
Article-I.D.: dartvax.922

I have been offered entry-level positions at both Hewlett-Packard and
IBM.  I feel that, sooner or later, I'd like to do research in some AI-
related field, and I'd like any comments you may have as to the
accessibility of the research labs to an employee starting out as a
programmer in either company.  I don't want to start a ridiculous
discussion of the overall merits of and/or problems with HP and IBM;
many articles have been written on both.  But things can change quickly
and there may be some work being done of which I'm not presently aware.
I'd appreciate any impressions, subjective or otherwise, that you may
have.  I hold an A.B. in Computer Science from Dartmouth.

      --Lorien Y. Pratt
        Dartmouth College Library
        Hanover, NH  03755

        decvax!dartvax!lorien

------------------------------

Date: 10 Mar 84 18:13:23-PST (Sat)
From: pur-ee!ecn-ee!davy @ Ucb-Vax
Subject: Copies of HAKMEM? - (nf)
Article-I.D.: pur-ee.1672

This has already been asked in UNIX-WIZARDS, I thought I'd ask it here
too.  Does anyone have a copy of HAKMEM (MIT Memo from Feb. 1972) they'd
be willing to Xerox?  I've heard there's an online copy at MIT-MC or
someplace -- anyone know where it's at?

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

[I have a copy of this memo (AIM 239, HAKMEM by M. Beeler, R.W. Gosper,
and R. Schroeppel).  It is a collection of notes by MIT hackers on about
20 different topics.  The document is about 100 pages long and includes
figures.  Does an online copy exist?  -- KIL]

------------------------------

Date: Mon 19 Mar 84 23:50:08-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Identifying programs

"Algorithmic Program Debugging" by Ehud Shapiro, MIT Press includes
substantial discussion of the question of identifying programs
from I/O pairs. Of course in general the identification is not
exact. Concepts of asymptotic identification ("identification in
the limit") are used instead. A lot of this work has been
developed to try to pin down the concept of "learnable language".
There are a number of recent papers on this question by Scott
Weinstein (University of Pennsylvania) and others, in the journal
Information and Control. If anyone is interested, I'll dig out
the references.
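
[Pereira's point can be made concrete with a toy learner: enumerate a
hypothesis class and, after each I/O pair, conjecture the first
hypothesis consistent with all data seen so far.  The linear-function
class below is invented for illustration; "identification in the limit"
shows up as the guesses eventually converging:

```python
# Toy "identification in the limit": conjecture the first hypothesis in
# a fixed enumeration that fits every observed I/O pair.
def hypotheses():
    for a in range(-5, 6):
        for b in range(-5, 6):
            yield (a, b)

def first_consistent(pairs):
    """Return the first (a, b) with a*x + b == y for all observed pairs."""
    for a, b in hypotheses():
        if all(a * x + b == y for x, y in pairs):
            return (a, b)
    return None

target = lambda x: 3 * x - 2                  # the hidden "program"
stream = [(x, target(x)) for x in range(5)]   # its I/O behaviour

seen, guesses = [], []
for pair in stream:
    seen.append(pair)
    guesses.append(first_consistent(seen))
# Early guesses may be wrong, but they converge to the target (3, -2).
```

As in the literature Pereira cites, the identification is exact only in
the limit: no finite prefix of the stream rules out every wrong
hypothesis by itself.  -- Ed.]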

-- Fernando Pereira

------------------------------

Date: 19 Mar 1984 20:56:34-PST
From: don%brandeis.csnet@csnet-relay.arpa
Subject: Fuzzy Sets

I have a reference for Daniel Conde who requested information about
Fuzzy Sets on a recent AI bulletin board:

        Fuzzy Sets and Systems: Theory and Applications

        Didier Dubois & Henri Prade
        copyright 1980
        Academic Press
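
[For readers new to the area, the pointwise operations that texts such
as the one above begin with (Zadeh's max/min definitions) take only a
few lines; the sets and membership degrees here are invented examples:

```python
# Fuzzy sets: membership is a degree in [0, 1]; union and intersection
# are taken pointwise as max and min (Zadeh's definitions).
tall  = {"ann": 0.9, "bob": 0.4, "cal": 0.1}
heavy = {"ann": 0.2, "bob": 0.7, "cal": 0.1}

def fuzzy_union(a, b):
    return {x: max(a[x], b[x]) for x in a}

def fuzzy_intersection(a, b):
    return {x: min(a[x], b[x]) for x in a}

tall_or_heavy  = fuzzy_union(tall, heavy)
tall_and_heavy = fuzzy_intersection(tall, heavy)
```
-- Ed.]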

                                Don Ferguson

------------------------------

Date: 19 Mar 84 11:36:10 CST (Mon)
From: ihnp4!houxa!homxa!rem
Subject: Computer Manipulated Novel

For people interested in computer-aided art, I manipulated a small,
unpublished novel of mine called ABRACADABRA a few years ago.  The
book is a mystery derived from childhood experiences in St. Louis.
I call the manipulated book ABRACADABRA CADAVER.  Chapter-by-chapter
I wrote UNIX shell programs to alter the text according to its con-
tents: for example, in an early chapter I misspelled all words as
a child might do.  In another I inserted German proverbs appropriate
to my father's speech in all of his conversations.  Another repeats
key phrases again and again, in a minimalist way; another puts all
dialog into footnotes; another, where the mystery unfolds, cryptically
reverses the sentences throughout--and so on.  After editing the
end results, I came up with a Joycean-like book that is quite
readable and interesting as a literary document.  I no longer
have it on-line, but if anyone is interested, I can provide
more details.  And, of course, if anyone knows of a publisher
crazy enough.....

Bob Mueller

BELLCORE
Holmdel, NJ

------------------------------

Date: 11 Mar 84 11:01:10-PST (Sun)
From: harpo!ulysses!burl!clyde!akgua!mcnc!ecsvax!hsplab @ Ucb-Vax
Subject: Computer EKG's
Article-I.D.: ecsvax.2145

One reason why computerized EKG's have become so popular in the medical
environment is that **most** of the EKGs are performed on normal people
and are being used as a screening process.  This means that if a computer
program is very good at differentiating between normals and abnormals
without any other capability (not true with current programs), it will
probably do better than 90%.  It is for this reason that a cardiologist
overview is used primarily to catch gross errors and to refine problems
associated with pathological cases.  In a study done by Bailey at the
NIH in the early 1970's, most computer programs actually did rather well,
and if you removed interpretation differences which were common among
cardiologists and tested the programs on grossly abnormal cases, they
were able to achieve better than 60%-70% accuracy.
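
[The base-rate argument here is easy to check numerically.  The
prevalence, sensitivity, and specificity below are illustrative
numbers, not figures from the Bailey study:

```python
# When most screened EKGs are normal, overall accuracy is dominated by
# how the program does on normals, not on pathological cases.
def overall_accuracy(prevalence, sensitivity, specificity):
    """Fraction of all cases labelled correctly."""
    return prevalence * sensitivity + (1 - prevalence) * specificity

# 5% of a screening population abnormal; a program mediocre on
# abnormals (60%) but solid on normals (95%) still scores above 90%.
acc = overall_accuracy(prevalence=0.05, sensitivity=0.60, specificity=0.95)
```
-- Ed.]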

David Chou
Department of Pathology
University of NC, Chapel Hill
    !decvax!mcnc!ecsvax!hsplab

------------------------------

Date: 20 Mar 84 11:46:20 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Strategic Computing in the New York Review of Books

The March 15, 1984 issue of The New York Review of Books contains an
article entitled "The Costs of Reaganism", which mentions DARPA's
Strategic Computing Program as an example of misdirected U.S. economic
and budgetary policy.  The article is by Emma Rothschild, who teaches in
the Science, Technology, and Society program at MIT and is the author of
"Paradise Lost: The Decline of the Auto-Industrial Age".

  ...What does it mean for America's future economic growth that
  69 percent of federally supported research and development is
  for military purposes, an increase since 1981 of $18.1 billion
  in the military function and of $0.6 billion in non-military
  functions? [21]

    Does it matter for the character of America's scientific
  institutions that the Defense Advanced Research Projects
  Agency's new "strategic computing" program is in the process
  of transforming academic computer science?[22]  Does it
  matter for American competitiveness that Japan's ten-year
  program on the cognitive, linguistic, and engineering
  foundations of computing will be civilian, while America's
  will be concerned with robot reconnaissance vehicles,
  radiation-resistant wafers, and missile defenses, with
  "speech recognition" in the "high-noise, high-stress environment
  [of] the fighter cockpit," and with "voice  distortions due
  to the helmet and face mask"? [23]  Mr. Reagan's principal
  opponents are not asking these questions; they are questions
  about the militarization of the political life, the scientific
  potential, and the economic society of the richest country in
  the world.

  [21] "Special Analyses", Budget of the United States Government,
    FY 1985, p. K-30.
  [22] The program is described in Weinberger's Annual Report, p. 263,
    and also in the Defense Advanced Research Projects Agency's
    own study "Strategic Computing" (DARPA, October 28, 1983).
    In this study DARPA explains that it intends to use contract
    personnel from industry as well as university researchers, in
    order to "avoid a dangerous depletion of the university
    computer science community":  "The magnitude of this national
    effort could represent a very large perturbation to the
    university community" (p. 64)
  [23] DARPA, "Strategic Computing", pp. 34-35.

------------------------------

Date: Wed 21 Mar 84 14:05:12-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Tax-Free Support vs. Income Averaging

Bob Boyer of UTexas-20 posted a bboard message about IRS policy on
tax-free student fellowships.  This isn't AIList material, but it
will be of interest to many students on and off the Arpanet, so I
am making it available for those who want to post it at their sites.
I have copied the message to file <AIList>IRS.TXT on SRI-AI, and will
send copies to interested people who can't FTP the file.  I have
also included related messages from others who read the original.

The content is roughly this: if you claim that your current academic
support is tax-free (or if the IRS makes that claim), and if such income
is at least 50% of your support, you will probably not be able to income
average during the next four years.  This is very likely to cost you more
than the tax you save on your current fellowship or other support.

                                        -- Ken Laws

------------------------------

Date: 19 Mar 84 14:23:22 PST (Monday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 3/22

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                Richard M. Tong
                Advanced Information and Decision Systems


        RUBRIC: An Intelligent Aid for Information Retrieval


In this talk I will describe an ongoing research project that is
concerned with developing a computer based aid for information retrieval
from natural language databases. Unlike other attempts to improve upon
Boolean keyword retrieval systems, this research concentrates on
providing an easily used rule-based language for expressing retrieval
concepts. This language draws upon work in production rule systems in AI
and allows the user to construct queries that give better precision and
recall than more traditional forms.

The talk will include a discussion of the main elements in the system
(which is written in LISP and C), the key research issues (including
some comments on the important role that uncertainty plays) and some
man-machine interface questions (in particular, the problem of providing
knowledge elicitation tools).
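
[A minimal sketch of what such rule-based retrieval might look like;
the concept names, weights, and max-combination below are invented for
illustration and are not taken from RUBRIC itself:

```python
# Rule-based retrieval: a query concept is defined by weighted rules
# over sub-concepts and keywords, and a document is scored by
# recursively evaluating the rule tree against its words.
RULES = {
    # concept: list of (evidence, weight); evidence is a keyword or concept
    "takeover": [("acquisition", 0.9), ("merger", 0.8), ("buyout", 0.7)],
    "finance":  [("takeover", 1.0), ("stock", 0.5)],
}

def score(concept, words):
    """Degree to which a document (a set of words) supports a concept."""
    if concept not in RULES:                 # base case: a plain keyword
        return 1.0 if concept in words else 0.0
    return max(w * score(e, words) for e, w in RULES[concept])

doc = set("the merger was followed by a stock rally".split())
```

Unlike a Boolean keyword query, the graded score lets documents be
ranked, which is where the claimed precision/recall gains come from.
-- Ed.]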


Thursday, March 22, 1984        4:00 pm

*** Please note the location change ***

Hewlett-Packard
1651 Page Mill Road
Palo Alto, CA
28C Lower Auditorium

Be sure to arrive at the building's lobby on time, so that you may be
escorted to the meeting room.

------------------------------

Date: 19 Mar 1984  15:26 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Computational Linguistics (BOSTON)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Wednesday, March 21     4:00pm      8th floor playroom

De-mystifying Modern Grammatical Theory and Artificial Intelligence
Robert Berwick

It has frequently been suggested that modern linguistic theory is
irreconcilably at odds with a ``computational'' view of human
linguistic abilities.  In fact, linguistic theory provides a rich
source of constraints for the computationalist.  In this talk I will
outline some of the key changes in grammatical theory from the mid 60's to
the present day that support this claim, and at the same time try to
dispel a number of myths:

Myth: Modern grammars are made up of large numbers of rules that
one cannot ``implement.''

Myth: Modern grammars are not relevant to computational models
of language processing.

Myth: Knowledge that you can order hamburgers in restaurants
aids *on-line* syntactic processing.

------------------------------

Date: 20 Mar 84 11:30:18 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Experiments with Rule Writer for EXPERT

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    Experiments with Rule Writer for EXPERT
          Speaker:  George Drastal
          Date:     Tuesday, April 3, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge


  George  Drastal,  a Ph.D. student in our department, will describe his thesis
research in an informal talk.  His abstract:

       Results are presented of some experiments with Rule  Writer,  an  AI
    system  that  assists  knowledge  engineers  with  the  task of writing
    inference rules  for  a  medical  consultation  system  in  the  EXPERT
    formalism.    Rule  Writer  (RW) is used primarily in an early stage of
    expert system development, to generate a prototype rule base.   RW  may
    also   be   used  as  a  testbed  for  experimenting  with  alternative
    organizations   of   expert   knowledge   in   the   EXPERT   knowledge
    representation.

------------------------------

Date: Wed, 21 Mar 84 18:38 PST
From: BrianSmith.PA@PARC-GW.ARPA
Reply-to: BrianSmith.PA@PARC-GW.ARPA
Subject: Course Announcement -- Lisp: Language and Literature

         [Forwarded from the SRI CSLI bboard by Laws@SRI-AI.]

The following course will be the CSLI Seminar on Computer Languages
for the Spring Quarter [at Stanford].  If you are interested in attending,
please read the notes on dates and registration, at the end.

                Lisp: Language and Literature

A systematic introduction to the concepts and practices of programming,
based on a simple reconstructed dialect of LISP.  The aim is both to
convey and to make explicit the programming knowledge that is
typically acquired through apprenticeship and practice.  The material
will be presented under a linguistic reconstruction, using vocabulary
that should be of use in studying any linguistic system.  Considerable
hands-on programming experience will be provided.

Although intended primarily for linguists, philosophers, and
mathematicians, anyone interested in computation is welcome.  In
particular, no previous exposure to computation will be assumed.
However, since we will aim for rigorous analyses, some prior familiarity
with formal systems is essential.  Also, the course will be more like a
course in literature and creative writing, than like a course in, say,
French as a second language.  The use of LISP, in other words, will
be primarily as a vehicle for larger issues, not so much an object of
study in and of itself.  Since LISP (unlike French) is really very
simple, we will be able to teach it in class and lab sessions.  Tutorial
instruction and some individual programming assistance will be provided.

Topics to be covered include:

   -- Procedural and data abstraction;
   -- Objects, modularity, state, and encapsulation;
   -- Input/output, notation, and communication protocols;
   -- Meta-linguistic abstraction, and problems of intensional grain;
   -- Architecture, implementation, and abstract machines;
   -- Introspection, self-reference, meta-circular interpreters, and reflection.

Throughout, we will pay particular attention to the following themes:

   -- Procedural and declarative notions of semantics;
   -- Interpretation, compilation, and other models of processing;
   -- Implicit vs. explicit representation of information;
   -- Contextual relativity, scoping mechanisms, and locality;
   -- Varieties of language: internal, external, theoretical;
   -- Syntax and abstract structure: functionalism & representationalism.

Organizational Details:

   Instructor: Brian C. Smith, Xerox PARC/Stanford CSLI; 494-4336 (Xerox);
      497-1710 (Stanford), "BrianSmith@PARC" (Arpanet).

   Classes: Tuesdays and Thursdays, 2:00 - 3:30, in Room G19, Redwood
      Hall, Jordan Quad.

      NB:  Since we will be using the computers just now being installed
      at CSLI, there may be some delay in getting the course underway.
      In particular, it is possible that we will not be able to start until
      mid-April.  A follow-up note with more details will be sent out as
      soon as plans are definite.

   Registration: Again, because of the limited number of machines, we
      may have to restrict participation somewhat.  We would therefore
      like anyone who intends to take this course to notify Brian Smith
      as soon as possible.  Note that the course will be quite demanding:
      10 to 20 hours per week will probably be required, depending on
      background.

   Sections: As well as classes, there will be section/discussion periods
      on a regular basis, at times to be arranged at the beginning of the
      course.

   Reading: The course will be roughly based on the "Structure and
       Interpretation of Computer Programs" textbook by Abelson and
       Sussman that has been used at M.I.T., although the linguistic
      orientation will affect our dialects and terminology.

   Laboratory: Xerox 1108s (Dandelions) will be provided by CSLI, to be
      used for problem sets and programming assignments.  Instructors &
      teaching assistants will be available for assistance at pre-arranged
      times.

   Credit: The course may be listed as a special topics course in Computer
      Science.  However (in case that does not work out) anyone wishing
      to take it for credit should get in touch, so that we can arrange
      reading course credit.

------------------------------

End of AIList Digest
********************

∂26-Mar-84  1241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #35
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Mar 84  12:39:01 PST
Date: Mon 26 Mar 1984 11:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #35
To: AIList@SRI-AI


AIList Digest            Monday, 26 Mar 1984       Volume 2 : Issue 35

Today's Topics:
  AI Tools - Nature of AI Computing,
  Logic Programming - Inferential and Deductive Processing,
  Expert Systems - VLSI Knowledge Acquisition & Explanatory Capability,
  Mathematics - Four Color Theorem,
  System Identification - Characterizing Automata From I/O Pairs,
  Seminars - Compositional Temporal Logic & Logic Programming,
  Course - Expert Systems for CAD/CAT
----------------------------------------------------------------------

Date: Thu 22 Mar 84 15:08:19-EST
From: Sal Stolfo <sal@COLUMBIA-20.ARPA>
Subject: A call for discussion

             "The Numericists Meet the Symbolicists and Ask Why?"

With   the  recent  interest  in  Fifth  Generation  Computing  and  Artificial
Intelligence, many scientists with backgrounds in other  disparate  fields  are
beginning to study symbolic computation in a serious manner.

The  ``parallel  architectures  community'' has mostly been interested in novel
computer architectures to accelerate numeric computation  (usually  represented
as  Fortran  codes).    Similarly,  the ``data base machine community" has been
interested in more conventional data processing (for example, large-scale  data
bases).   Now that the interest of these communities and others are focusing on
Artificial Intelligence computing, a question that is often asked is ``What are
the fundamental characteristics of AI computation that distinguish it from more
conventional computation"?  Indeed, are there really any differences at all?

These questions have no simple answers; they can be viewed from many  different
perspectives.    This  note  is  a  solicitation of the AI community for cogent
discussion of this issue.  We hope that all facets will be addressed including:

   - Differences between the kinds of problems encountered in AI and those
     considered more conventional.   (A   simple   answer   in   terms  of
     ``ill-defined'' and ``well-defined'' problems is viewed as a copout.)

   - Methodological differences  between  AI  computing  and  conventional
     computing.

   - Computer resource  requirements  and  programming  environments  with
     technical  substantiations  of  the differences rather than aesthetic
     preferences.

I expect to collect responses from the AI community and produce a final  report
which will be made available to any interested parties.

Thank you in advance.

Salvatore  J. Stolfo
Assistant  Professor
Computer Science Department
Columbia University

------------------------------

Date: 24 Mar 1984 00:11:35-PST
From: hildum%brandeis.csnet@csnet-relay.arpa
Subject: Inferential and Deductive Processing using Lisp and Prolog

(This message has been sent to both the AIList and the Prolog Digest)

I am looking for some information concerning the following:

(1) The use of Prolog and Lisp for deductive and inferential processing.
(2) Standard methods of handling deductive and inferential processing
    in Prolog and Lisp.
(3) Any languages similar or different to Prolog and Lisp that have been
    used for deductive and inferential processing.
(4) What types of inferential and deductive processing cannot be done using
    Prolog ?  Using Lisp ?

Suggestions of applicable articles and research projects, as well as personal
observations would be greatly appreciated.  I am attempting to get a feel for
what kinds of things can and cannot be done to handle deductive and
inferential processing with existing Logic/AI programming languages.
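
[For concreteness, one common sense of "deductive processing" is
forward chaining over Horn-style rules to a fixed point: the kind of
inference Prolog performs (top-down rather than bottom-up) and that is
routinely hand-built in Lisp.  The rules and facts below are invented
for illustration:

```python
# Forward chaining: apply every rule whose premises hold, adding the
# conclusion as a new fact, until no new facts appear.
RULES = [
    ({"bird", "not_penguin"}, "flies"),
    ({"canary"}, "bird"),
    ({"canary"}, "not_penguin"),
]

def forward_chain(facts):
    """Saturate the fact set under the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"canary"})
```
-- Ed.]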

Responses (ASAP) will be greatly appreciated.  Please reply to:

        hildum%brandeis.csnet@csnet-relay.csnet

I will gladly post a summary to the net if there is enough interest in
the subject.

        Thank you,

                David W. Hildum

                US Mail:        Box 1417
                                Brandeis University
                                Waltham, Massachusetts
                                02254

------------------------------

Date: 21 Mar 84 20:44:16-PST (Wed)
From: decvax!cwruecmp!sundar @ Ucb-Vax
Subject: Expert Systems in VLSI
Article-I.D.: cwruecmp.1113

This is only a request.  Has any one documented the knowledge
acquisition techniques used for this application domain?
I conducted a few interviews with local VLSI experts and the
difficulty I had was the formulation of appropriate questions
to elicit maximum response.  Any references would be appreciated.
Thanks.

Sundar Iyengar
USENET:         decvax!cwruecmp!sundar
CSNET:          sundar@Case
ARPANET:        sundar.Case@Rand-Relay

Posted: 11:43:29 pm, Wednesday March 21, 1984.

------------------------------

Date: 21 Mar 84 9:24:14-PST (Wed)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: "Explaining" expert system algorithms.
Article-I.D.: eosp1.715

References:

I think it is hopeless to demand that the algorithms
instanced by expert systems be well understood so they can be questioned.
Even when the algorithms can easily be printed, they will be hard for
any human being to comprehend, except in the most trivial systems.

Expert systems attempt to imitate one kind of human thinking, in which
what we call "judgment" plays a part.  I expect that as expert systems
become more sophisticated, they will become harder and harder to judge,
just as the think-work of human beings is hard to judge for quality.

True "artificial intelligence" systems will have these problems in spades.

Please note that we have already reached the point where ordinary
procedural software is hard to judge.  It's quite common to spend 18
months shaking down a moderate-sized piece of software.
                                        - Toby Robison
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 23 Mar 84 12:52:55-PST (Fri)
From: ihnp4!houxm!hou2d!wbp @ Ucb-Vax
Subject: color junk
Article-I.D.: hou2d.225

Re: color junk
From: Wayne Pineault <hou2d!wbp>

        From a homology point of view a sphere and a plane are not the
same, but from the point of view of coloring they are the same, since you
pick the point at infinity on the plane to map inside one of the regions
on the sphere.
        Also, a closed coloring formula has long been known for a sphere
with any number of donut holes and Möbius strips attached, as long as it
was not the plain sphere itself!  If you just plugged in 0 for a sphere
the answer came out to 4, but the argument did not work for this case!!!
        There is a Springer-Verlag series of mathematics, and I saw this
formula there, but I don't remember it.
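[The formula Pineault half-remembers is the Heawood bound, conjectured by
Heawood in 1890 and proved by Ringel and Youngs in 1968.  A sketch, with
function name and interface my own; the Klein bottle is the one known
exception, where 6 colors suffice although the formula gives 7:

```python
import math

def heawood_number(genus, orientable=True):
    """Chromatic number of a surface by the Heawood formula.

    chi = 2 - 2*genus for a sphere with `genus` handles ("donut holes"),
    chi = 2 - genus for a sphere with `genus` cross-caps (Moebius strips).
    The Klein bottle (non-orientable, genus 2) is the lone exception:
    the formula gives 7, but 6 colors actually suffice.
    """
    chi = 2 - 2 * genus if orientable else 2 - genus
    return math.floor((7 + math.sqrt(49 - 24 * chi)) / 2)

print(heawood_number(1))                    # torus: 7 colors
print(heawood_number(0))                    # sphere: the formula gives 4
print(heawood_number(1, orientable=False))  # projective plane: 6 colors
```

As the message notes, plugging in the sphere gives 4, but the Heawood
argument itself does not cover that case.  -- Ed.]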

                                        Wayne Pineault

------------------------------

Date: 23 Mar 84 17:14:17-PST (Fri)
From: decvax!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: RE characterizing automata from I/O pairs
Article-I.D.: uvacs.1206

        The question is certainly interesting and very natural.  Not
surprisingly, it has been investigated in depth.  As a matter of fact,
the Moore theory of experiments (which is precisely the theory of
"characterizing automata from I/O pairs") was one of the subjects
investigated in the 50's which gave impetus to the introduction and
study of regular languages.

        A nice little book by J.H. Conway ("Regular Algebra and Finite
Machines", Chapman and Hall, 1971) has a chapter-long summary of
results including an answer to your question about the bound on the
length of the characterizing experiment.  A few paraphrases:

Def.  An exact (n,m,p) machine is a Moore machine with n states, m input
symbols, and p output symbols, each output symbol being actually emitted
in some state.  (Take m = p = 2 if you want arguments in terms of bits.)

Theorem.  Two distinguishable states of an exact (n,m,p) machine can be
distinguished by some word of length at most n-p.

(That is, for any two distinguishable states p, q, there exists a word w
of length <= n-p such that the output corresponding to w will differ
depending on whether it is started in p or q.)

Theorem. If S is a set of at most s states of an exact (n,m,p) machine,
and some two states in S are distinguishable, then there exists a word
of length at most max( 0, n-p-s+2 ) which distinguishes some two states
in S.  Moreover, this bound is best possible.

Theorem. If we are (explicitly) given an exact (n,m,p) machine whose
states are all distinguishable, and told that it is initially in one
of a set S of at most s states, then we can specify an experiment of
length at most (t-1)(n-p-(t-2)/2) where t = min( s, n-p+2 ), after
application of which the resulting state will be known (so you find
your position in the machine in case you were "lost in S").  Moreover,
the bound is best possible.

        In the above an "experiment of length k" means an algorithm
which feeds input symbols depending on the observed outputs; k is
the number of symbols fed in.

        The following answers your question.  It is a paraphrase
of Theorems 9 & 11, pp. 12-14 of Conway's text (the original result
is due to Moore, improved slightly by Conway):

Theorem.  If you know that M is a strongly connected exact (n,m,p)
machine with pairwise distinguishable states, then there is an experiment
of length at most

                        8 * m^(2n-1) * n^2 * log2(n)

which tells you the structure of the machine.
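[The building block of such experiments is distinguishing pairs of states.
A minimal sketch of that subproblem, assuming a Moore machine stored as
Python dictionaries; the representation and names are mine, not Conway's:

```python
from collections import deque

def distinguishing_word(delta, out, p, q):
    """Shortest input word whose outputs separate states p and q of a
    Moore machine; returns None if the states are indistinguishable.

    delta: dict mapping (state, symbol) -> next state
    out:   dict mapping state -> output symbol
    Breadth-first search over state pairs yields a shortest word, in
    line with the n-p length bound in the first theorem above.
    """
    if out[p] != out[q]:
        return ""                       # outputs already differ
    symbols = sorted({a for (_, a) in delta})
    seen = {frozenset((p, q))}
    queue = deque([(p, q, "")])
    while queue:
        s, t, w = queue.popleft()
        for a in symbols:
            s2, t2 = delta[(s, a)], delta[(t, a)]
            if out[s2] != out[t2]:
                return w + a
            pair = frozenset((s2, t2))
            if pair not in seen:
                seen.add(pair)
                queue.append((s2, t2, w + a))
    return None

# Three states over alphabet {'a'}: 0 and 1 both emit 'x', 2 emits 'y'.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2}
out = {0: 'x', 1: 'x', 2: 'y'}
print(distinguishing_word(delta, out, 0, 1))  # 'a'
```

-- Ed.]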

Ed Howorka (erh@uvacs on CSNET)

------------------------------

Date: 22 Mar 84  1600 PST
From: Diana Hall <DFH@SU-AI.ARPA>
Subject: Compositional Temporal Logic

         [Forwarded from the SRI CSLI bboard by Laws@SRI-AI.]

                HOW COMPOSITIONAL CAN TEMPORAL LOGIC BE?

                       Speaker:  Prof. Amir Pnueli
                        Weizmann Institute, Israel

                       Tuesday, March 27, 2:30 p.m.
                       Room 352 Margaret Jacks Hall

Abstract:  A compositional proof system based on temporal logic is presented.
The system supports systematic development of concurrent systems by
specifying modules and then proving a specification for their combination.
The specifications of modules are expressed by temporal logic.

------------------------------

Date: Fri 23 Mar 84 18:28:23-EST
From: Jan <komorowski@MIT-OZ>
Subject: Logic Programming Seminars at Harvard

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                                SEMINAR

                        LOGIC PROGRAM DERIVATION

                             Danny Krizanc
                          Harvard University

                        Tuesday, April 3, 1984

                                4 PM
                              Aiken G23

Danny will present a work he has done in my course Technology of Logic
Programming on program transformation. The method of Burstall and
Darlington is translated into resolution-based theorem proving and
applied to logic programs. The method is subsequently extended beyond
the limits of the functional approach.

------------------------------


Date: Fri 23 Mar 84 18:28:23-EST
From: Jan <komorowski@MIT-OZ>
Subject: Logic Programming Seminars at Harvard

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                                COLLOQUIUM

                APPLICATION OF PROLOG TO GENERATION OF TEST DATA
                        FROM ALGEBRAIC SPECIFICATIONS

                        Prof. Marie-Claude Gaudel
                        Universite de Paris-Sud

                        Monday, April 9, 1984
                                4 PM
                        Aiken Lecture Hall
                        Tea in Pierce 213 at 3:30

ABSTRACT: Functional testing, or "black-box testing," has been recognized
for a long time as an important aspect of software validation. With
the emergence of formal specification methods it becomes possible to
place functional testing on a rigorous basis. This lecture presents a
method of generating sets of test data from algebraic specifications.
The method has been implemented using Prolog. It turns out that Prolog
is a very well-suited tool for generating sets of test data in this context.

Host : Professor Henryk Jan Komorowski

------------------------------

Date: Thu, 22 Mar 84 17:52:39 PST
From: Tulin Mangir <tulin@UCLA-CS.ARPA>
Subject: Course in Expert Systems for CAD/CAT

UCLA School of Engineering, Computer Science Department
is offering a new course, in Spring Quarter, in the
area of applications of Expert Systems to CAD and CAT in general, and
to VLSI and WSI design and testing specifically.

A Brief description of the topics to be covered follows.
Some of the projects in this course are extensions of the projects
that are started in the "Testing and Design for Testability for VLSI"
class that we offer once a year.  I also teach that course.

I welcome any questions, comments, and suggestions and promise to
give a state of the course(!) report on line for those who are
interested.


Tulin E. Mangir
<cs.tulin@UCLA-CS>
(213) 825-2692
      825-1322 (secretary)

                -------------------------------------
UCLA COMPUTER SCIENCE DEPARTMENT

Spring 84

New Course on Expert Systems

CS259 Section 4

EXPERT SYSTEMS WITH APPLICATIONS TO CAD AND CAT

Instructor: Professor Tulin E. Mangir

Time: MW 4-6pm (TBA)

FIRST MEETING IN 5252 BOELTER HALL, W 4-6PM 4/4/84.


This course is open to all graduate students who are interested
in the development and application of expert systems.
Students are encouraged to develop projects using the
tools and environments available at UCLA or otherwise.
Instructor's special interest is developing expert systems for design
and testability analysis of VLSI and WSI.

For any questions please contact instructor 825-2692, or 3532L Boelter Hall.

Course Outline:

 o Introduction
 o Organization of Expert Systems
 o Representation of Digital structure and behaviour
 o Requirements for data base, rule base, knowledge base design and interfaces
   between them; control structure
 o Languages, logic programming (PROLOG), frameworks
 o Application domains for expert systems in CAD, CAT and automated processing
 o Example systems under development -- DRC, 2-D Planner, Hitest, Excat, others.
 o Limitations
 o Future directions

------------------------------

End of AIList Digest
********************

∂29-Mar-84  0017	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #36
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  00:17:09 PST
Date: Wed 28 Mar 1984 23:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #36
To: AIList@SRI-AI


AIList Digest           Thursday, 29 Mar 1984      Volume 2 : Issue 36

Today's Topics:
  AI in Criminology - Request,
  AI Reports - NASA Memoranda Request,
  Expert Systems - Software Development Request,
  Planning - JPL Planner Request,
  AI Environments - Micro O/S Request,
  Expert Systems - Explanatory Capability & Article & OPS5 Examples,
  Seminars - Machine Learning & Incomplete Databases & Control of Reasoning
----------------------------------------------------------------------

Date: Mon, 26 Mar 84 21:00:38 pst
From: jacobs%ucbkim@Berkeley (Paul Jacobs)
Subject: information wanted on AI in criminology

        I have been asked for information concerning AI applications in
criminology, particularly in locating and keeping track of criminals.  I
am aware of various uses of computers in analyzing fingerprints and other
data; however, I have not heard of successful ``intelligent'' programs.

        I'd appreciate any information on this matter.

Thanks,

--paul

------------------------------

Date: 26 Mar 84 12:40:01-PST (Mon)
From: hplabs!hao!seismo!rochester!jss @ Ucb-Vax
Subject: NASA tech. memorandum
Article-I.D.: rocheste.5853

I am trying to get:
        NASA Technical Memorandum 85836, June  1983;
        NASA Technical Memorandum 85838, Sept. 1983;
        NASA Technical Memorandum 85839, Oct.  1983.

They are Volume I of their report entitled:
        An overview of Artificial Intelligence and Robotics

"This report is part of the NBS/NASA series of overviews on AI and
Robotics."  Any help in getting an official draft would be greatly
appreciated.  (Copies aren't bad either.)  A path to NASA, if one exists,
would also be appreciated.  Thanks in advance.


                        Jon S. Stumpf
                        U. of Rochester
                        {allegra|decvax|seismo}!rochester!jss

[I have sent Jon a copy of the NTIS ordering info that I printed in
AIList V1 #81 back in October.  This included the first of the reports
mentioned above; I am not sure about the others since the serial numbers
I have are for the NTIS version.  The Gevarter overviews I have read
seem to be reasonably good summaries of the major projects in vision
and expert systems.  -- KIL]

------------------------------

Date: 26 Mar 84 15:40:19-PST (Mon)
From: ihnp4!ihuxf!dunk @ Ucb-Vax
Subject: Expert Systems for Software Development?
Article-I.D.: ihuxf.2119

Anyone have references to papers describing the use of expert systems
in a software development environment (e.g. program synthesis,
programmer's consultant, debugging aid, etc.)?  Thanks much.
        Tom Duncan
        AT&T Bell Laboratories
        ihnp4!ihuxf!dunk

------------------------------

Date: Wed, 28 Mar 84 18:04:42 CDT
From: Mike Caplinger <mike@rice.ARPA>
Subject: JPL Planner

Can anybody give me any references to the Jet Propulsion Lab's
"autonomous space probe" project?  This system is supposed to be able
to schedule different observations in a limited time frame (like a
planetary flyby) based on priorities and feedback from previous results.
Is it really AI or just some kind of optimization hack?

                thanks,
                Mike

------------------------------

Date: Wed, 28 Mar 84 12:39:59 pst
From: Peter T. Young <youngp%cod@Nosc>
Subject: 32/16-bit O/S Information Request

We would like to obtain descriptions of/sources for the following
operating systems:
      RMX86
      CP/M (Z80 & 8085)
      CP/M-86
      MS-DOS (Z-DOS)
      UNIX
      VMS
      TOPS-20
that could be run on 32/16-bit or 32/32-bit CPU-based microcomputer
systems which are either already in production, or are scheduled for
production in the near future.  Our aiming-point is a system that will
run a reasonably useful version of LISP or PROLOG in a real-time
environment.

Could you provide us with some pointers for such information?  Any help
you might provide would prove extremely useful.  Thanks for considering
this request.
                               Peter T. Young
                               (Code 9411)
                               NOSC
                               San Diego, CA 92152
                               (619) 225-6686
                               <youngp@NOSC>

------------------------------

Date: 27 Mar 84 14:45:44 EST  (Tue)
From: Dana S. Nau <dsn%umcp-cs.csnet@csnet-relay.arpa>
Subject: expert system algorithms

        From: Toby Robison <eosp1!robison>

        I think it is hopeless to demand that the algorithms instanced by
        expert systems be well understood so they can be questioned.  Even
        when the algorithms can easily be printed, they will be hard for any
        human being to comprehend, except in the most trivial systems. ...

I disagree.  One of the reasons for separating an expert system's control
structure from the knowledge base is to allow for complex behavior with a
simple control algorithm.  For example, Mycin's control structure is only
about one typewritten page [1].  Jim Reggia and I at the Univ. of Maryland
are currently working on a considerably more complex expert system control
structure, but even it is not THAT hard to understand once one understands
the preliminary mathematical background [2].  We even have a proof of
correctness for the algorithm!
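[The separation Nau describes -- a small control algorithm driving a
declarative knowledge base -- can be illustrated with a toy backward
chainer.  The rules below are invented miniatures, not Mycin's; no
variables or certainty factors, purely a sketch of how small control
can be:

```python
def backchain(goal, rules, facts):
    """A complete (toy) expert-system control structure: all domain
    knowledge lives in `rules` and `facts`, none in this code.

    rules: list of (conclusion, [premises]) pairs.
    A goal holds if it is a known fact, or if some rule concludes it
    and all of that rule's premises hold recursively.
    """
    if goal in facts:
        return True
    for conclusion, premises in rules:
        if conclusion == goal and all(backchain(p, rules, facts)
                                      for p in premises):
            return True
    return False

# Invented miniature knowledge base:
rules = [("bacterium", ["gram-negative", "rod-shaped"])]
facts = {"gram-negative", "rod-shaped"}
print(backchain("bacterium", rules, facts))  # True
```

-- Ed.]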

REFERENCES:

[1] Davis, Buchanan, and Shortliffe.  Production Rules as a Representation
for a Knowledge-Based Consultation Program.  ARTIFICIAL INTELLIGENCE 8
(1977), 15-45.

[2] Reggia, Nau, and Wang.  A Theory of Abductive Inference in Diagnostic
Expert Systems.  TR-1338, Computer Sci.  Dept., Univ.  of Maryland (Dec.
1983).  Submitted for Publication.

------------------------------

Date: Sun 25 Mar 84 22:40:03-PST
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: SCIENCE, 23 Mar. 1984

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The aforementioned issue of SCIENCE has a "feature story" on knowledge-
based systems in general and expert systems in particular (p1279-1282),
with various luminaries and their luminary programs mentioned. Pretty
good article.

Ed Feigenbaum

------------------------------

Date: 23 Mar 84 19:55:52-PST (Fri)
From: pur-ee!uiucdcs!parsec!ctvax!pozsvath @ Ucb-Vax
Subject: Re: Request for OPS5 examples
Article-I.D.: uiucdcs.6361

;;;                                             Peter Ozsvath
;;;
;;;                     Factorial Program in OPS-5
;;;              Simulates the stack in working memory
;;;

;;;  Stage 1. Fill the working memory with (fact 1) (fact 2)...(fact n)
;;;
(p fact0 (fact {<x> = 0}) --> (remove 1) (make factorial 1 1))

(p factn (fact {<n> > 0}) --> (make fact (compute <n> - 1)))

;;;  Negative number entered. Quit
(p factneg (fact {<n> < 0}) -->
    (write "Good-bye."))

;;;  Stage 2. Sweep out the unnecessary (fact k) and (factorial k (k-1)!)
;;;     and add new (factorial (k+1) k!)
;;;
(p factorial (factorial <x> <y>) (fact <x>) -->
    (remove 2)
    (remove 1)
    (make factorial (compute (<x> + 1)) (compute (<x> * <y>))))

;;;  When no more (fact k) statements are left, factorial k is found
;;;
(p factorial2 (factorial <x> <y>) -(fact <x>) -->
    (write (crlf))
    (write (compute <x> - 1))
    (write "!! =")
    (write <y>)
    (write (crlf))
    (make infinifact))

;;;  Called once at the beginning
;;;
(p pretty←fact (start) -->
    (write "Program to demonstrate the power, compactness, and robustness")
    (write (crlf))
    (write "of the Winning OPS 5. This program inputs numbers whose")
    (write (crlf))
    (write "factorials - lo and behold - it computes RECURSIVELY")
    (write (crlf))
    (write (crlf))
    (remove 1)
    (make infinifact))

;;;  Circular "loop" that reads in numbers
;;;
(p infinifact (infinifact) -->
    (remove 1)
    (write "Enter a positive number to compute its factorial,")
    (write (crlf))
    (write "or a negative one to quit")
    (write (crlf))
    (bind <x> (accept))
    (make fact <x>))

;;;  Start
(start ((start)))

This program computes factorials in OPS-5. Some things seem to be
rather difficult to do in OPS-5. This same program could be
written in several lines of lisp!
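[For comparison, the core computation really does fit in a few lines of a
general-purpose language (Python here in place of Lisp):

```python
def factorial(n):
    # Recursive factorial; by convention factorial(0) = 1,
    # and non-positive inputs also return 1 in this sketch.
    return 1 if n <= 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```

-- Ed.]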

------------------------------

Date: 26 Mar 84 10:06:31 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: KELLER SPEAKING AT ML ON WED.

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      MACHINE LEARNING BROWN BAG SEMINAR

Speaker:   Richard Keller
Date:      Wednesday, March 28, 1984 - 12:00-1:30
Location:  Hill Center, Room 254


  Placing Learning in Context:

        SOURCES OF CONTEXTUAL KNOWLEDGE FOR CONCEPT LEARNING

 (Alternatively titled: The Mysterious Origins of LEX's Learning Goal)


In this talk, I  will describe a new  source of knowledge for  concept
learning: knowledge of the  learning context.  Most previous  research
in machine learning has failed to recognize contextual knowledge as  a
distinct and useful form of learning knowledge.  Contextual  knowledge
includes, among other  things, knowledge of  the purpose for  learning
and knowledge of the performance task to be improved by learning.  The
addition of this meta-knowledge, which describes the learning process,
provides a broader perspective on learning than has been available  to
most previous  learning systems.

In general, learning  systems that omit  contextual knowledge have  an
insufficient vantage point from which to supervise learning activity.
Both AM [Lenat-79] and LEX  [Mitchell-83], for instance, were  limited
by an  inability to  adapt  to changes  in their  respective  learning
environments, even  when  the  changes  were a  result  of  their  own
learning behavior.  This  limitation is  not particularly  surprising;
neither of these systems contained  an explicit representation of  the
task they were  performing (specifically,  mathematical discovery  and
integral calculus  problem  solving,  respectively).   Nor  did  these
systems contain any knowledge about the relationship between  learning
and the  task  performance.   Before  it is  reasonable  to  expect  a
learning system to  adapt to changes  in the task  environment, it  is
necessary  to  represent  task  knowledge  and  to  incorporate   this
knowledge into learning procedures.   My research, therefore,  focuses
on the representation  and use  of contextual  knowledge --  including
task knowledge -- as guidance for concept learning.

In this talk, I will  describe a learning framework that  incorporates
the use  of  contextual  knowledge.  In  addition,  I  will  introduce
various alternative methods of representing contextual knowledge,  and
sketch the design of some learning algorithms that utilize  contextual
knowledge.  Examples will be   drawn, in large part,  from my work  on
incorporating contextual knowledge within the LEX learning system.

------------------------------

Date: 27 Mar 84 14:22:39 EST
From: DSMITH@RUTGERS.ARPA
Subject: Rutgers Computer Science Colloquium

              [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                        Department of Computer Science

                                  COLLOQUIUM


SPEAKER:       Dr. Witold Lipski, Jr.
               Polish Academy of Sciences

TITLE:         LOGICAL PROBLEMS RELATED TO INCOMPLETE INFORMATION IN DATABASES


A  general methodology  for  modeling  incomplete  information in databases is
described, and then illustrated in the case of  three  concrete  models  of  a
database.   We emphasize the distinction between two different interpretations
of a query language --  the  external  interpretation,  which  refers  queries
directly  to  the  real world  modeled  by  the  database;  and  the  internal
interpretation, which refers queries  to  the  information  about  this  world
available  in  the  database.  Our methodology stresses the need for a precise
definition of the semantics of the query language by means of a non-procedural
specification,   and   for   a   correct   procedural implementation  of  this
specification.  Various logical -- and, at  times, combinatorial  --  problems
connected  with  information  incompleteness  are discussed.   Related work is
surveyed and an extensive bibliography is included.
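[Lipski's internal interpretation -- answering from the information the
database actually contains -- can be made concrete by quantifying a query
over all completions of the unknown values.  A toy sketch; the records,
domain, and function names are mine:

```python
from itertools import product

def completions(db, domain):
    """Every way of filling in the None (unknown) fields of a database."""
    slots = [(i, k) for i, rec in enumerate(db)
             for k, v in rec.items() if v is None]
    for values in product(domain, repeat=len(slots)):
        world = [dict(rec) for rec in db]
        for (i, k), v in zip(slots, values):
            world[i][k] = v
        yield world

def certainly(query, db, domain):
    """True iff the query holds in every completion of the database."""
    return all(query(w) for w in completions(db, domain))

def possibly(query, db, domain):
    """True iff the query holds in at least one completion."""
    return any(query(w) for w in completions(db, domain))

db = [{"name": "smith", "dept": "sales"},
      {"name": "jones", "dept": None}]      # jones's department is unknown
domain = ["sales", "research"]
someone_in_sales = lambda w: any(r["dept"] == "sales" for r in w)
print(certainly(someone_in_sales, db, domain))  # True
```

-- Ed.]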

DATE:           Friday, March 30, 1984
TIME:           2:50 p.m.
PLACE:          Room 705 - Hill Center
                                Coffee at 2:30

------------------------------

Date: 28 Mar 1984  09:48 EST (Wed)
From: Crisse Ciro <CRISSE%MIT-OZ@MIT-MC.ARPA>
Subject: Genesereth Talks on Control of Reasoning

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


             Procedural Hints in the Control of Reasoning

                        Michael R. Genesereth
                     Computer Science Department
                         Stanford University

                      DATE: Thursday, March 29
                      TIME: 4:00 PM
                     PLACE: NE43 8th Floor Playroom

          [This talk is also being given at IBM San Jose on
          Friday, April 6, 10:00.  -- KIL]


One of the key problems in automated reasoning is control of
combinatorics.  Whether one works forward from given premises or
backward from desired conclusions, it is usually necessary to consider
many inference paths before one succeeds in deriving useful results.
In the absence of advance knowledge as to which path or paths are
likely to succeed, search is the only alternative.

In some situations, however, advance knowledge is available in the
form of procedural hints like those found in math texts.  Such hints
differ from facts about the subject of reasoning in that they are
prescriptive rather than descriptive; they say what a reasoner OUGHT
to do rather than what is TRUE.

This talk describes a language for expressing hints to control the
process of reasoning and provides an appropriate semantic account in
the form of an interpreter that behaves in accordance with the hints.
The work is relevant to understanding the phenomenon of introspection
and is of practical value in the construction of expert systems.


HOST: Prof. Randy Davis

------------------------------

End of AIList Digest
********************

∂29-Mar-84  1401	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #37
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  14:01:31 PST
Date: Thu 29 Mar 1984 10:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #37
To: AIList@SRI-AI


AIList Digest            Friday, 30 Mar 1984       Volume 2 : Issue 37

Today's Topics:
  Expert Systems - Judicial Expert Systems & Radiography,
  Linguistics - Use of 'And',
  Bibliography - Fuzzy Set Papers,
  Proposals - AI Teaching & Hierarchical System Research,
  Seminars - Objects and Parts & Pattern Recognition & Databases
----------------------------------------------------------------------

Date: Thu, 29 Mar 84 07:49 PST
From: DSchmitz.es@Xerox.ARPA
Subject: Judicial Expert Systems?

I'd like to know if there's any work going on out there toward the
development of expert systems (or other AI-type systems) designed to
assist in making legal decisions.  Such systems as I have in mind would
be used by judges, lawyers, legal theorists, perhaps even international
courts.

Please reply to DSchmitz.es@PARC-MAXC

Thank you

[I believe there has been work at Stanford and at Yale.  I also remember
reading some newspaper account of a man who wishes to market an
automated jury: each side types in its legal precedents and
the computer decides which side wins.  AIList carried a seminar
notice about the Stanford work last year.  Can anyone give more specific
information?  -- KIL]

------------------------------

Date: 26 Mar 84 11:28:10-PST (Mon)
From: decvax!linus!vaxine!debl @ Ucb-Vax
Subject: Help on Radiography Discussion

I have been told that a discussion of expert systems that read radiographs
occurred on the net recently.  Any information or references from this
discussion would be appreciated.  Thank you.

                David Lees

[There was a message by Dr. Tsotsos about his group's work at U.
of Toronto on the ALVEN system for interpreting heart images.
You might also inquire on Vision-List (Kahn@UCLA-CS); it has not
discussed this topic, but you might get a discussion started.
Dr. Jack Sklansky and associates have been developing systems
to find tumors in chest radiographs; they might be considered
"expert systems" in the sense that their performance is very
good.  Chris Brown, Dana Ballard, and others at the U. of Rochester
have been using hypothesize-and-test and other AI techniques in the
analysis of chest radiographs and ultrasound heart images.  -- KIL]

------------------------------

Date: 24 Mar 84 2:49:00-PST (Sat)
From: decvax!cca!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Use of 'and'
Article-I.D.: inmet.1150

I haven't heard of that one, but there was an article recently (in
Datamation?) about a natural language processing system which
repeatedly gave no results when asked for "all customers in Ohio
and Indiana".  Of course, no customer can be in both states
at once; the question should have been phrased as ".. Ohio *or*
Indiana".  When this was pointed out, the person using the
program commented something to the effect of "Don't tell *me*
how to think!"
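[The bug is reading the English "and" as Boolean conjunction over a single
record instead of union over the result set; a made-up miniature:

```python
customers = [{"name": "acme", "state": "OH"},
             {"name": "bolt", "state": "IN"}]

# Literal Boolean reading of "customers in Ohio and Indiana": always
# empty, since no single customer record is in both states at once.
both = [c for c in customers if c["state"] == "OH" and c["state"] == "IN"]

# Intended reading: the union of the two answer sets.
either = [c for c in customers if c["state"] in ("OH", "IN")]

print(len(both), len(either))  # 0 2
```

-- Ed.]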

------------------------------

Date: Tue, 27 Mar 1984 11:13:27 EST
From: David M. Axler <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Fuzzy Set Papers

     Some very interesting early work on the applications of fuzzy set
theory to language behavior was done at the Language Behavior Research
Laboratory out at U. Cal - Berkeley.  Much of this was later available
via the Lab's series of Working Papers and Monographs.  Of interest to
AI researchers concerned w/language processing and/or fuzzy sets are:

Monograph #3, "Natural Information Processing Rules:  Formal Theory and
  Applications to Ethnography", William H. Geoghegan, 2/73.

Working Paper #43, "Basic Objects in Natural Categories", Eleanor Rosch,
Carolyn B. Mervis, Wayne Gray, David Johnson, and Penny Boyes-Braem, 1975.

Working Paper #44, "Color Categories as Fuzzy Sets", Paul Kay and Chad
McDaniel, 1975.

My list of the available papers is severely out of date, and I strongly
suspect that there's a fair amount of later work also available.  Those
interested should write to the lab, as follows:

University of California
Language Behavior Research Laboratory
2220 Piedmont Avenue
Berkeley, CA 94720

(If anyone out at Berkeley would like to fill the list in on more recent
and relevant work from the lab, great...)

  --Dave Axler

------------------------------

Date: 26 Mar 84 12:12:42-PST (Mon)
From: harpo!ulysses!burl!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Teaching Proposal
Article-I.D.: psuvax.919

My main interest is Artificial Intelligence, although I define that rather
broadly: to me, AI is the field which unifies all others.  Philosophy,
psychology, mathematics, compilers, languages (both programming and natural),
"systems" (both of the CS and of the EE variety), data structures, machine
architecture, discrete representations, continuous representations, cognitive
science, art, history, music, business administration, political science,
etc, etc, etc, are ALL subfields of AI to me.  They all represent specific
domains in which intelligent activity is studied and/or mechanized.

I'm sure many agree with me that what the American educational system
needs is a program integrating computer "literacy" with critical thinking
abilities in many other domains.  I do not mean "literacy" in the "Oh yes,
I can run statistical packages" sense.  I mean an approach to critical
thinking built on the foundations of the computational paradigm -- the view
that knowledge and understanding can be represented explicitly, and that one
can discover procedures for manipulating those representations in order to
solve real problems.  Such a program could form the backbone of a very
stimulating university-wide undergraduate "core" program integrating not
only mathematics and the physical sciences but communications skills and
all the "liberal arts" as well.

I visualize such a program as presenting a coherent and integrated approach
to the cognitive skills most important for healthy and productive functioning
in the modern world.  It would present the major principles of cognition as
seen through the organizing principles of information processing.

This is more than an approach to teaching.  To me, it is also the seed of new
approaches to machine learning and cognitive modeling.  It uses undergraduate
education as an experimental testbed for research in AI, psychology,
linguistics, and social systems.  That "cutting edge" fervor alone should
make it very interesting to students.

Bob Giansiracusa
Computer Science Dept, Penn State U, 814-865-9507 (ofc), 814-234-4375 (home)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
UUCP:   bobgian@psuvax.UUCP            -or-    ..!allegra!psuvax!bobgian
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
USmail: PO Box 10164, Calder Square Branch, State College, PA 16805

------------------------------

Date: 26 Mar 84 11:38:27-PST (Mon)
From: harpo!ulysses!burl!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Proposed Research Description
Article-I.D.: psuvax.918

ADAPTIVE COGNITIVE MODEL FORMATION

The goal of this work is the automatic construction of models which can
predict and characterize the behavior of dynamical systems at multiple
levels of abstraction.

Numeric models used in simulation studies can PREDICT system behavior
but cannot EXPLAIN those predictions.  Traditional expert systems can
explain certain predictions, but their accuracy is usually limited to
qualitative ("symbolic") statements.  This research effort attempts to
couple the explanatory power of symbolic representations with the
precision and testability of numeric models.

Additionally, the computational burden implicit in the use of numeric
simulation models rapidly becomes astronomical when accurate performance
is needed over large domains (fine sampling density).

The solution my work explores consists of developing AUTOMATICALLY
a hierarchical sequence of SYMBOLIC models which convey QUALITATIVE
information of the sort that a human analyst generates when interpreting
numeric simulations.  These symbolic models portray system behavior at
multiple levels of abstraction, allowing symbolic simulation and inference
procedures to optimize the "run time" versus "accuracy" tradeoff.

I profess the philosophical bias that the study of learning and modeling
mechanisms can proceed productively in a relatively domain-independent
manner.  Obviously, domain-specific knowledge will speed the solution search
process.  Such constraints can be regarded as "seeds" for search in a process
whose algorithm is largely domain-independent.  Anecdotal support for this
hypothesis comes from the observation that HUMANS can become expert at theory
and model formation in a wide variety of different domains.

Bob Giansiracusa
Computer Science Dept, Penn State U, 814-865-9507 (ofc), 814-234-4375 (home)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
UUCP:   bobgian@psuvax.UUCP            -or-    ..!allegra!psuvax!bobgian
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
USmail: PO Box 10164, Calder Square Branch, State College, PA 16805

------------------------------

Date: Wed, 28 Mar 84 10:09:15 pst
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: UCB Cognitive Science Seminar--April 3

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

             BERKELEY COGNITIVE SCIENCE PROGRAM

                        Spring 1984

            IDS 237B - Cognitive Science Seminar

        Time:         Tuesday, April 3, 1984, 11-12:30pm
        Location:     240 Bechtel

              OBJECTS, PARTS  AND  CATEGORIES
        Barbara Tversky, Dept. of Psychology, Stanford

Many psychological, linguistic and anthropological measures
converge to a preferred level of reference, or BASIC LEVEL,
for common categories; for example, TABLE, in lieu of
FURNITURE or KITCHEN TABLE.  Here we demonstrate that
knowledge of categories at that level (and only that level)
of abstraction is dominated by knowledge of parts.  Basic
level categories are perceived to share parts and to differ
from one another on the basis of other features.  We argue
that knowledge of part configuration underlies the
convergence of perceptual, behavioral and linguistic
measures because part configuration plays a large role in
both appearance and function.  Basic level categories are
especially informative because structure is linked to
function via parts at this level.

*****  Followed by a lunchbag discussion with speaker  *****
***  in the IHL Library (Second Floor, Bldg. T-4) from 12:30-2  ***

------------------------------

Date: 28 Mar 1984 09:28:05-PST (Wednesday)
From: Guy M. Lohman <LOHMAN%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR.IBM-SJ@csnet-relay.arpa>
Subject: IBM San Jose Research Laboratory calendar of Computer
         Science seminars 2-6 April 84

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

  Thurs., April 5 Computer Science Colloquium
  3:00 P.M.   MINIMUM DESCRIPTION LENGTH PRINCIPLE IN MODELING
  Auditorium  Traditionally, statistical estimation and modeling
            involve besides certain well established procedures,
            such as the celebrated maximum likelihood technique,
            a substantial amount of judgment.  The latter is
            typically needed in deciding upon the right model
            complexity.  In this talk we present a recently
            developed principle for modeling and statistical
            inference, which to a considerable extent allows
            reduction of the judgment portion in estimation.
            This so-called MDL-principle is based on a purely
            information theoretic idea.  It selects that model in
            a parametric class which permits the shortest coding
            of the data.  The coding, of which we need only the
            length in terms of, say, binary digits, must,
            however, be self-contained in the sense that it
            includes the description of the parameters used in
            the imagined encoding.  For this reason,
            the optimum model cannot possibly be very complex
            unless the data sample is very large.  A fundamental
            theorem gives an asymptotically valid formula for the
            shortest possible code length as well as for the
            optimum model complexity in a large class of models.
            For short samples no simple formula exists, but the
            optimum complexity can be estimated numerically and
            taken advantage of.  Finally, the principle is
            generalized so as to allow any measure for a model's
            performance such as its ability to predict.

            J. Rissanen, San Jose Research
            Host:  P. Mantey
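The idea of selecting the model that permits the shortest coding of the data can be made concrete with a toy example. The following sketch (my own illustration, not from the talk; the names and the asymptotic parameter charge are assumptions) compares a zero-parameter "fair coin" model against a one-parameter Bernoulli model on a binary sample, charging the usual (1/2) log2 n bits per estimated parameter:

```python
import math

def code_length_fair(bits):
    # Zero-parameter model: each bit costs exactly 1 bit.
    return len(bits)

def code_length_bernoulli(bits):
    # One-parameter model: data cost n*H(p-hat) plus (1/2)*log2(n) bits
    # to describe the estimated parameter itself (the asymptotic MDL charge).
    n = len(bits)
    ones = sum(bits)
    p = ones / n
    if p in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -(ones * math.log2(p) + (n - ones) * math.log2(1 - p))
    return data_bits + 0.5 * math.log2(n)

skewed = [1] * 90 + [0] * 10     # strongly biased sample: extra parameter pays off
balanced = [1, 0] * 50           # roughly fair sample: extra parameter does not
```

On the skewed sample the Bernoulli model yields the shorter total description despite its parameter cost; on the balanced sample the simpler model wins, illustrating why the optimum model cannot be complex unless the data warrant it.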

  Fri., April 6 Computer Science Seminars
  Auditorium

            KNOWLEDGE AND DATABASES (11:15)

            We define a knowledge based approach to database
            problems.  Using a classification of applications from
            the enterprise to the system level, we can give
            examples of the variety of knowledge which can be
            used.  Most of the examples are drawn from work at
            the KBMS Project in Stanford.  The objective of the
            presentation is to illustrate both the power and the
            high payoff of quite straightforward artificial
            intelligence applications in databases.
            Implementation choices will also be evaluated.
            G. Wiederhold, Stanford University
            Host:  J. Halpern

   ---------------------------------------------------------------

  Visitors, please arrive 15 mins. early.  IBM is located on U.S. 101
  7 miles south of Interstate 280.  Exit at Ford Road and follow the signs
  for Cottle Road.  The Research Laboratory is IBM Building 028.
  For more detailed directions, please phone the Research Lab receptionist
  at (408) 256-3028.  For further information on individual talks,
  please phone the host listed above.

  IBM San Jose Research mails out both the complete research calendar
  and a computer science subset calendar.  Send requests for inclusion
  in either mailing list to CALENDAR.IBM-SJ at RAND-RELAY.

------------------------------

End of AIList Digest
********************

∂29-Mar-84  2317	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #38
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  23:17:06 PST
Date: Thu 29 Mar 1984 22:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #38
To: AIList@SRI-AI


AIList Digest            Friday, 30 Mar 1984       Volume 2 : Issue 38

Today's Topics:
  Planning - JPL Planning System,
  Expert Systems - Legal Expert Systems,
  Architectures - Concurrency vs. Parallelism,
  News - New VLSI CAD Interest List,
  Seminars - Concurrency and Logic
----------------------------------------------------------------------

Date: 29 Mar 1984 1634-PST
From: WAXMAN@USC-ECL.ARPA
Subject: JPL Planning System


Len Friedman at JPL; Friedman@USC-ECLA did the work on the planning
system someone asked about.

[...]

MILT WAXMAN
WAXMAN@USC-ECLA

------------------------------

Date: Thu 29 Mar 84 15:18:10-PST
From: Anne Gardner <GARDNER@SU-SCORE.ARPA>
Subject: Legal expert systems

The seminar notice you referred to in Ailist was my oral.  I'm still
finishing off the dissertation, which is called "An Artificial Intelligence
Approach to Legal Reasoning."  For a sketch of what it's about, see
the 1983 AAAI proceedings, in which I had a paper.

--Anne Gardner

[Jeff Rosenschein@SU-SCORE also pointed out Anne's work. -- KIL]

------------------------------

Date: 29 Mar 84 17:36:35 EST
From: MCCARTY@RUTGERS.ARPA
Subject: judicial expert systems

I saw your query in the recent AILIST Digest.  Are you familiar with the
TAXMAN project at Rutgers?  Strictly speaking, this is not a "judicial expert
system," since our goal at the present time is not to build a large practical
system for use by lawyers.  Instead, we are exploring a number of theoretical
issues about the representation of legal rules and legal concepts, and the
process of legal reasoning and legal argumentation.  We believe that this
is an essential step for the construction of sophisticated expert systems
for lawyers in the future.  Some recent references:

    McCarty, L.T., "Permissions and Obligations," IJCAI-83, pp. 287-294.

    McCarty, L.T., and Sridharan, N.S., "The Representation of an Evolving
        System of Legal Concepts: II. Prototypes and Deformations," IJCAI-81,
        pp. 246-253.

    McCarty, L.T., and Sridharan, N.S., "A Computational Theory of Legal
        Argument," Technical Report LRP-TR-13, Laboratory for Computer
        Science Research, Rutgers University (1982).

    McCarty, L.T., "Intelligent Legal Information Systems:  Problems and
        Prospects," Rutgers Computer and Technology Law Journal, Vol. 9,
        No. 2, pp. 265-294 (1983).

This latter article articulates some of our ideas about practical systems,
and discusses several related projects by other researchers.


Thorne McCarty

------------------------------

Date: Thu, 29 Mar 84 17:52:33 PST
From: Philip Kahn <kahn@UCLA-CS.ARPA>
Subject: Non-Von's are not Von's

        The  ``parallel  architectures  community'' has mostly been interested
        in novel computer architectures to accelerate numeric computation
        (usually  represented as Fortran codes).

        What are the fundamental characteristics of AI computation that
        distinguish it from more conventional computation?
        Indeed, are there really any differences at all?


        I disagree with the claim that the "parallel architectures community"
has been trying to find a parallel Fortran.  Indeed, that is not possible,
since the best that could be attained would be *concurrent seriality*.
On the whole, I feel acceleration of numerical computation is not the primary
goal of those researching parallel architectures.  Rather, I feel the primary
thrust of this work has been to define inherently parallel structures and
their possible applications.

        Before we all espouse our personal viewpoints on this subject, I
feel it might be useful to agree upon our terms; they seem to vary from
person to person.  *Serial* is a single step move through a computation.
*Concurrent serial* is the simultaneous processing of more than one
serial computation.  *Parallel* is the local computation of global properties
by dedicated processors.

        Yes! There are differences between AI-motivated parallel
computation and conventional computation.  Conventional computation runs
on your standard store-bought Von Neumann machine that runs in a *serial*
fashion.  "Pseudo-conventional" machines are able to run *concurrent serial*
programs (e.g., Ada, Concurrent Pascal, etc.) utilizing several Von Neumann
processors.  *A truly parallel machine computes global properties based upon
local criteria.*  Each criterion is locally computed via a dedicated
processor.  The design of parallel machines is a tough problem.
A growing number of researchers feel that
*cellular automata* are the building blocks of all parallel structures.
The design of parallel machines using cellular automata involves the design
of local consistency conditions, oscillation behavior, equilibrium effects,
and a myriad of other non-conventional subjects.
Thus, I feel that there are in fact significant differences between parallelism
and "conventional" methods.
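To make the distinction concrete, here is a minimal sketch (my own illustration, not from the message above) of a one-dimensional cellular automaton in which every cell applies the same purely local rule, yet a global property of the lattice emerges:

```python
# Each cell sees only itself and its two neighbors; no processor ever
# inspects the whole lattice, yet a global answer is computed.

def step(cells):
    """One synchronous update: a cell turns on if it or a neighbor is on."""
    n = len(cells)
    return [cells[(i - 1) % n] | cells[i] | cells[(i + 1) % n]
            for i in range(n)]

def globally_on(cells):
    """Iterate local updates until the lattice reaches a fixed point."""
    while True:
        nxt = step(cells)
        if nxt == cells:
            return all(cells)
        cells = nxt

# A single active cell spreads by purely local computation until the
# whole lattice agrees -- a global property from local criteria.
lattice = [0] * 7
lattice[3] = 1
```

Contrast this with a serial scan of the lattice: the result is the same, but here the computation is decomposed into identical local processors, which is the sense of "parallel" used above.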

------------------------------

Date: Thu, 29 Mar 84 12:54 PST
From: ANDERSON.ES@Xerox.ARPA
Reply-to: Anderson <Anderson.es@Xerox.ARPA>
Subject: NEW VLSI CAD INTEREST DL

This is to announce a new distribution list for the purpose of
discussing issues and exchanging ideas pertaining to VLSI Computer Aided
Design and Layout.

I hope for this DL to encompass a broad range of topics including but
not limited to: VLSI CAD/CAE/CAM hardware, software, layout, design,
techniques, programming, fracturing, PG, plotting, maintenance, vendors,
bugs, workstations, DRC, ERC, system management, peripheral equipment,
design verification, testing, benchmarking, archiving procedures, etc.
etc.

The distribution list itself resides on the Xerox Ethernet.  Ethernet
users can send messages to CADinterest↑.es.  Arpanet, Milnet, Usenet,
and other Internet users can send messages to CADinterest↑.es@PARC-MAXC.
[You will probably need to use quotes to get the special symbol through
your mailer: "CADinterest↑.es"@PARC-MAXC.  -- KIL]

[...]

Anyone on the Xerox Ethernet may add themselves using Maintain.
Arpanet, Milnet, Usenet, and other Internet users should send a request
to me (Anderson.es@PARC-MAXC) and I will add you to the DL.  I will also
add whole DL's if requested by the owner.

For now, there are no rules set for this DL.  Depending on how large it
gets, I hope to keep it as anything goes and see what happens for a
while.  I will wait a week before sending any messages to the DL in
order to allow people to be added to the DL.  If we get some good
informative discussions going, I will try to archive the responses or
maybe go to a digest format.  Thank you for your indulgence.

Craig Anderson
VLSI CAD Lab Supervisor
Xerox Corp.
El Segundo, Ca.
213-536-7299

------------------------------

Date: Wed 28 Mar 84 23:40:00-PST
From: Al Davis <ADavis at SRI-KL>
Subject: John Conery Seminar Friday the 30th

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                               Seminar

                           Friday, March 30
                 10:00 a.m. in the AI Conference Room

           Fairchild AI Labs, 4001 Miranda Ave., Palo Alto

                                  by

                            John S. Conery
                         University of Oregon

Title:  The AND/OR Process Model for Parallel Interpretation of Logic
Programs.

Abstract:  In contrast to the traditional depth first sequential process
tree search used for logic program evaluation, this talk presents the AND/OR
process model.  It is a method for interpretation by a system of
asynchronous, independent processes that communicate only by messages.
The method makes it possible to exploit two distinct forms of
parallelism.  OR parallelism is obtained from evaluating
nondeterministic choices in parallel.  AND parallelism arises in the
execution of deterministic functions, such as matrix multiplication or
divide and conquer algorithms, that are inherently parallel.  The two
forms of parallelism can be exploited at the same time.  This means
AND parallelism can be applied to clauses that are composed of several
nondeterministic components, and it can recover from incorrect choices
in the solution of these components.  In addition to defining parallel
computations, the model provides a more precisely defined procedural semantics
for logic programs; that is, parallel interpreters based on this model
are able to generate answers to queries that cause standard
interpreters to go into an infinite loop.  The interpretation method
is intended to form the theoretical framework of a highly parallel non
von Neumann computer architecture; the talk concludes with a
discussion of issues involved in implementing the abstract interpreter
on a multiprocessor.
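The flavor of OR parallelism can be sketched in a few lines. The following is a toy illustration of the idea only (my construction, not Conery's model or implementation; the goal, clauses, and use of threads are all assumptions): the alternative clauses for a goal are evaluated concurrently, and the first successful branch supplies the answer:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_or(goal, clauses):
    """Try every clause for `goal` concurrently; return the first solution."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(clause, goal) for clause in clauses]
        for fut in as_completed(futures):
            result = fut.result()
            if result is not None:
                return result
    return None   # every nondeterministic choice failed

# Three alternative "clauses" for the same goal; only one can succeed.
clauses = [
    lambda g: None,                       # this branch fails
    lambda g: g + 1 if g > 0 else None,   # succeeds for positive goals
    lambda g: None,                       # this branch fails
]
```

AND parallelism would, analogously, run the conjuncts of a single clause body concurrently; the model's contribution is letting the two forms coexist via message-passing processes.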
                                                al

Notes to visitors:  Arrive at Fairchild between 9:45 and 10:00 and go
to the guard and tell him you are there to visit Al Davis at X4385.
They will call me and someone will come down and get you and haul you
off to the AI conference room.

------------------------------

Date: 29 Mar 84  1157 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Seminar in foundations of mathematics (Professor Kreisel)

[Forwarded from the CSLI bboard by Laws@SRI-AI.]

Organizational meeting

TIME:   Tuesday, Apr. 3, 4:15 PM
PLACE:  Philosophy Dept. Room 92 (seminar room)
TOPIC:  Logic and parallel computation.

We will begin by examining some recent papers where
parallel computation is used in interesting ways
to obtain better algorithms.

The logical part will be to investigate how efficient
algorithms using parallel computation might be extracted
from infinite proof trees by applying transformations
that use only finite amounts of information.

At the first meeting these ideas will be explained in some more detail.
Ideas and suggestions will be welcome.

The seminar is scheduled to meet Tuesdays at 4:15, but can
be changed if there are conflicts.

------------------------------

End of AIList Digest
********************

∂31-Mar-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #39
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 Mar 84  16:55:18 PST
Date: Sat 31 Mar 1984 15:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #39
To: AIList@SRI-AI


AIList Digest           Saturday, 31 Mar 1984      Volume 2 : Issue 39

Today's Topics:
  Applicative Simulation - Request,
  AI Tools - Request,
  Distributed Programs - Request,
  AI - Definition of AI Problems,
  Expert Systems - Explanatory Capability & Legal Systems & NY Times,
  Seminar - Theory of the Learnable,
  Course - System Description Languages
----------------------------------------------------------------------

Date: 2 Apr 84 12:50:24-EST (Mon)
From: ihnp4!houxm!hogpc!houti!ariel!vax135!ukc!srlm @ Ucb-Vax
Subject: applicative / functional simulation
Article-I.D.: ukc.4146

        I am looking for information on functional/applicative
        simulators of anything (communications protocols, to
        cite a good one) written in a PURELY applicative/functional
        style (no setq's please).

        If you know of anything about this (papers, programs, etc)
        I'd be very grateful if you could mail the pointers to me.


        Silvio Lemos Meira

        UUCP: ...!vax135!ukc!srlm
        computing laboratory
        university of kent at canterbury
        canterbury ct2 7nf uk
        Phone: +44 227 66822 extension 568

------------------------------

Date: Thu, 29 Mar 84 16:19 cst
From: Bruce Shriver <ShriverBD%usl.csnet@csnet-relay.arpa>
Subject: request for references

I would like to be referred to either one or two seminal papers or one or
two highly qualified persons in the following areas (if you send me the
name of an individual, the person's address and phone number would also
be greatly appreciated):

  1. Tutorial or survey papers on logic programming, particularly those
     dealing with several different language approaches.

  2. Reusable Software (please give references other than the Proceedings
     of the Workshop on Reusability in Programming, which was held in
     Newport, RI last September).

  3. Your favorite formal specification technique that can be applied to
     large scale, complex systems.  Examples demonstrating the completeness
     and consistency of a set of specifications for real systems.

  4. Integrated programming environments such as Cedar and Spice versus
     the Ada-style environments (APSEs, etc.).  Discussions on the
     relative merits of these two kinds of environments.

  5. Knowledge Based System Architectures (i.e., support of knowledge
     based systems from both the hardware and software point of view).
     Knowledge representation and its hardware/software implications.
     The relationship between "knowledge bases" and "data bases" and
     the relationship between knowledge base systems and data base
     systems.

Thank you very much for your time and consideration in this matter.  I
appreciate your help:     Bruce D. Shriver
                          Computer Science Department
                          University of Southwestern Louisiana
                          P. O. Box 44330
                          Lafayette, LA 70504
                          (318) 231-6606
                          shriver.usl@rand-relay

------------------------------

Date: 30 Mar 84 1208 EST (Friday)
From: Roli.Wendorf@CMU-CS-A
Subject: Distributed Programs

As part of my thesis, I am collecting information on the behavior of
distributed programs.  I define distributed programs as consisting of
multiple processes.  Thus, multiprocess programs running on uniprocessor
systems would qualify as well.

If you have written, or know of any distributed programs, I would like to
hear from you.  I am especially interested in hearing about distributed
versions of commonly used programs like editors, compilers, mail systems, etc.

Thanks in advance,
Roli G. Wendorf

------------------------------

Date: 30 Mar 84 12:04:36 EST  (Fri)
From: Dana S. Nau <dsn%umcp-cs.csnet@csnet-relay.arpa>
Subject: Re: A call for discussion

        From:  Sal Stolfo <sal@COLUMBIA-20.ARPA>

        This  note  is  a  solicitation of the AI community for cogent
        discussion ...  We hope that all facets will be addressed including:

        - Differences between the kinds of problems encountered in AI and those
        considered more conventional.  (A simple answer in terms of
        ``ill-defined'' and ``well-defined'' problems is viewed as a copout.)
        ...

One of the biggest differences involves how well we can explain how we
solve a problem.  The problems that humans can solve can be divided roughly
into the following two classes:

1.  Problems which we can solve which we can also explain HOW to solve.
Examples include sorting a deck of cards, adding a column of numbers, and
payroll accounting.  Any time we can explain how to solve a problem, we can
write a conventional computer procedure to solve it.

2.  Problems which we can solve but cannot explain how to solve (for a
discussion of some related issues, see Polanyi's "The Tacit Dimension").
Examples include recognizing a face, making good moves in a chess game, and
diagnosing a medical case.  We can't solve such problems using conventional
programming techniques, because we don't know what algorithms to use.
Instead, we use various heuristic approaches.

The latter class of problems corresponds roughly to what I would call AI
problems.

------------------------------

Date: 28 Mar 84 19:25:42-PST (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: 'Explaining' expert system algorithms
Article-I.D.: uiucdcs.6403

There is no need for expert system software to be well-understood by anyone but
its designers; there IS a need for systems to be able to explain THEMSELVES.
Witness human thinking: after 30 years of serious AI and much more of cognitive
psychology, we still don't know how we think, but we have relatively little
trouble getting people to explain themselves to us.

I think that we will not be able to produce such self-explanatory software
until we come up with a fairly comprehensive theory of our own mental workings;
which is, admittedly, not the same as understanding an expert program. On the
other hand, if you're a theoretical sort you tend to accept Occam's razor, and
so I believe that such a theory of cognition will be as simplifying as the
Copernican revolution was for astronomy. Thereafter it's all variations on a
theme, and expert systems too will one day be correspondingly easy.

                                                                Marcel S.
                                                                U of Illinois

------------------------------

Date: 30 Mar 1984 08:55-PST
From: SSMITH@USC-ECL
Subject: Expert Legal System

Regarding your request in the latest AI-LIST:
George Cross, an assistant prof. at Louisiana State University has
been working for approx. the last 2 years with a law prof. to formalize
that state's legal codes.  From what I understand, Louisiana uses a
form of law, not found in other states, based on precise rules, rather
than the method of referring to past cases to establish legal precedent.
I know he has a few unpublished papers in this area and is preparing
a paper for the Austin AAAI.  From what I can tell, the work is similar
in scope to McCarty's work at Rutgers.

George can be contacted over the CS-NET: cross%lsu.csnet@CSNET-RELAY.

    ---Steve Smith (SSmith@USC-ECL)

------------------------------

Date: Thu 29 Mar 84 06:32:32-PST
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: Expert Systems/NY Times

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

See front page story of today's New York Times entitled
"Machines Built to Emulate Human Experts' Reasoning".

Features knowledge engineering, expert systems, and Sheldon Breiner,
chairman of Syntelligence.

------------------------------

Date: 03/30/84 14:40:07
From: STORY@MIT-MC
Subject: Theory of the Learnable

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

TITLE:  "A THEORY OF THE LEARNABLE"
SPEAKER:        Professor L.G. Valiant, Harvard University
DATE:   Thursday, April 5, 1984
TIME:   3:45    Refreshments
        4:00    Lecture
PLACE:  NE43-512a

We consider concepts as represented by programs for recognizing them and define
learning as the process of acquiring such programs in the absence of any
explicit programming.  We describe a methodology for understanding the limits
of what is learnable as delimited by computational complexity.  The methodology
consists essentially of choosing some natural information gathering mechanism,
such as the observation of positive examples of the concept, and determining
the class of concepts that can be learnt using it in a polynomial number of
steps.  A probabilistic definition of learnability is introduced that leads to
encouraging positive results for several classes of propositional programs.
The ultimate aim of our approach is to identify once and for all the maximum
potential of learning machines.
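One of the simplest concept classes in this framework can be learned from positive examples alone. The sketch below is my own illustration in the spirit of the talk (the function names and data are invented): a monotone conjunction is learned by starting from the conjunction of all variables and deleting any variable that is false in some positive example:

```python
def learn_conjunction(positive_examples, n_vars):
    """Keep a variable in the hypothesis only if it is true in every
    positive example observed so far."""
    hypothesis = set(range(n_vars))
    for example in positive_examples:
        hypothesis = {v for v in hypothesis if example[v]}
    return hypothesis

# Target concept: x0 AND x2 over 4 variables.  Every positive example
# satisfies the target; irrelevant variables vary and get eliminated.
examples = [
    (1, 0, 1, 1),
    (1, 1, 1, 0),
    (1, 0, 1, 0),
]
learned = learn_conjunction(examples, 4)
```

Each example is processed in time polynomial in the number of variables, which is the kind of complexity bound the talk's definition of learnability demands.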

HOST:   Professor Silvio Micali

------------------------------

Date: Fri 30 Mar 84 17:28:02-PST
From: Ole Lehrmann Madsen <MADSEN@SU-SCORE.ARPA>
Subject: System Description Languages

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The following course will be given in the spring quarter:

   CS 249 TOPICS IN PROGRAMMING SYSTEMS:
          LANGUAGES FOR SYSTEM DESCRIPTION AND PROGRAMMING

Questions to Ole Lehrmann Madsen, M. Jacks Hall room 214, tel. 497-0364,
net address MADSEN@SU-SCORE.

Listing:  CS 249
Instructor: Ole Lehrmann Madsen
Time: Monday 1:00pm - 4:00pm
Room: 352 building 460

This course will consider tools and concepts for system description and
programming. A number of languages for this purpose will be presented. These
include SIMULA 67, DELTA, EPSILON and BETA, which have been developed as part
of research projects in Norway and Denmark.

SIMULA I was originally developed as a tool for simulation. SIMULA 67 is a
general programming language with simulation as a special application. The
formalization of a system as a SIMULA program often gave a better understanding
of the system than did the actual simulation results.
This was the motivation for designing a special language (DELTA) for making
system descriptions. DELTA is intended for communication about systems. e.g.
data processing, biology, medicine, physics. DELTA among others contains
constructs for describing discrete state changes (by means of algorithms) and
continuous state changes (by means of predicates).  The EPSILON language is
the result of an attempt to formalize DELTA by means of Petri Nets.

BETA is a programming language originally intended for implementing DELTA
descriptions of computer systems. However, the project turned into a long-
term effort with the purpose of developing concepts, constructs and tools
in relation to programming. The major result of this project is the BETA
language. BETA is an object-oriented language like SIMULA and SMALLTALK,
but unlike SMALLTALK, BETA belongs to the ALGOL family with respect to
block structure, scope rules and type checking.

Various other languages and topics may also be covered. Examples of this are:
Petri Nets, environments for system description and programming, alternative
languages like Aleph and Smalltalk, and implementation issues. Implementation
issues could include: transformation of a system description into a program,
and implementation of a typed language like BETA while obtaining dynamic
possibilities like those in LISP.

Prerequisites

Students are expected to have a basic knowledge of programming languages.
The course may to some extent depend on the background and interests of the
participating students. Students with a background in simulation or description
of various systems within physics, biology, etc. will be useful participants.

Course work

Students will be expected to read and discuss in class various papers
on system description and programming languages. In addition small
exercises may be given.  Each student is supposed to write a short
paper about one or more topics covered by the course and comment on
papers by other students.

------------------------------

End of AIList Digest
********************

∂03-Apr-84  2054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #40
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Apr 84  20:53:57 PST
Date: Mon  2 Apr 1984 21:38-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #40
To: AIList@SRI-AI


AIList Digest            Tuesday, 3 Apr 1984       Volume 2 : Issue 40

Today's Topics:
  Linguistics - "And" and Ellipsis,
  Misc - Notes From a Talk by Alan Kay,
  Seminar - Stereo Vision for Robots
----------------------------------------------------------------------

Date: 30 Mar 84 16:05 EST
From: Denber.wbst@PARC-MAXC.ARPA
Subject: Re: Use of 'and'

""all customers in Ohio and Indiana".  Of course, no customer can be in
both states at once; the question should have been phrased as ".. Ohio
*or* Indiana""

Well, this is actually a case of ellipsis.  "Or" has its own problems.
What is really meant is "all customers in Ohio and [all customers in]
Indiana".  Happens all the time.  Looked at this briefly at the U of R
before I was elided myself.  I don't have my references here.  Some work
at Toronto (?)  Perhaps Gary the Dog Modeller can say more.
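The difference between the two readings is just set semantics. A tiny illustration (mine, not from the original exchange; the customer names are invented):

```python
# Customers by state; no customer appears in both, matching the premise
# that no customer can be in two states at once.
ohio = {"Acme Supply", "Buckeye Corp"}
indiana = {"Hoosier Inc", "Wabash Ltd"}

# Literal "and": customers in BOTH states -- the intersection, which is empty.
literal_and = ohio & indiana

# Elliptical "and": "customers in Ohio and [customers in] Indiana" -- the union.
elliptical = ohio | indiana
```

The query system must recover the elliptical (union) reading even though the surface word is "and".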

                        - Michel
                        Speaker to Silicon

------------------------------

Date: Sat, 31 Mar 84  8:22:26 EST
From: Andrew Malis <malis@BBN-UNIX>
Subject: Notes from a talk by Alan Kay (long message)

        [Forwarded from the Info-Atari list by Tyson@SRI-AI.]

  Date: 23 Mar 1984 1214-EST (Friday)
  From: mit-athena!dm@mit-eddie (Dave Mankins )
  Subject: Notes from talk by Alan Kay at MIT

Dr. Alan Kay, one of the developers of Smalltalk and the Xerox Alto, and
currently a Vice President and Chief Scientist at Atari, gave a talk at
MIT yesterday (22 March 1984) titled: "Too many smart people: a personal
view of design in the computer field"

The abstract:

    This talk is about the battle between Form and Content in Design and
    why "being smart" usually causes content to lose.  "Insightful
    laziness" is better because (1) it takes maximum advantage of others'
    work and (2) it encourages "rotating" the problem into its simplest
    essence -- often by changing it completely.  In other words: Point
    of view is worth 80 IQ points!

Here are some tidbits gleaned from my notes:

One of the problems with smart people is that they deal with
difficulties by fixing them, rather than taking the difficulty as a
symptom of a flaw in the design, and noticing "a rotation into a new
simplicity."

When preparing his talk he realized that what he wanted to say was
basically inconsistent, that

    1) You should do things over, and
    2) You shouldn't do things over.

"Both of these are true as long as you get the boundary conditions
right."  (There ensues an anecdote about working with Seymour Cray to
get an early CDC6500 up at NCAR.  The 6500 hardware did not normalize
its floating point operations, but that was "okay" because "any sensible
model will converge".  When the NCAR meteorologists (who answer the
question "what will the weather be like?" by looking out the window)
tried to put their models up on the CDC6500, they didn't work.  They
insisted that the Fortran compiler do the normalization for them.  Kay
cited this as evidence that their model was wrong.  Hmph, it's easy to
make fun of meteorologists...)

Kay cited Minsky's Turing award lecture, in the Apr. 1970 JACM (or maybe
CACM, I didn't catch it): "Form and content aren't enough."  What has
happened to the computer science field over the last twenty years is
myopia:  "a myopia so acute that only the very brilliant can achieve
it."

As an example of this, Kay cited the decline from the SDS 940 in 1965 to
UNIX ("a mere shadow of what an operating system should be") to CP/M.  The
myopia in question is best illustrated by a failure of Kay's own: "When
we got our first IMSAI (mumble) we put Smalltalk up on it.  We had to do
a lot of machine coding on it, and we thought that wasn't right.  And it
performed about as well as BASIC does today.  We said 'This is clearly
inadequate.  What we need is 2Mb of memory and a fast disk.'  Thus we
left the door open for BASIC to crawl back out of its crypt."

He should be lynched.  At least he realizes the error of his ways.

He cited an article by Vannevar Bush, in a 1945 Atlantic Monthly,
titled, "As we may think", in which Bush described a multi-screened,
pointer-based system with access to the world's libraries, drawing
programs, etc.  Bush, of course, thought it was just a few years away
(he called it "Memex").

He alluded to Minsky's notion of "science-envy": Natural scientists look
at the universe and discover its laws.  Computer scientists make up
their universes.  "What we do is more like an art."  "You can judge
whether or not a field is overcome by science-envy if it sticks the word
'science' into its name: 'computer science', 'cognitive science',
'political science'..."

He talked about some of his early work, with Ed Teitel, developing an
early personal computer (ca. 1965) calligraphic display with a pointer.
It had "a wonderful language I developed, influenced by Sutherland's
Sketchpad (the best thesis ever done in computer science) and
Simula--everything I've ever done has been influenced by Sketchpad and
Simula.  Everyone who tried to use it hated it.  They all had about the
same reaction to it that everyone has to APL today."  Shortly after
working on this he saw Papert's work with LOGO and children, and
resolved that everything he did from that day forth would be
programmable by children.

Part of the machine's problem stemmed from the fact that it didn't have
enough memory.  This in turn stems from the fact that we cast hardware
in concrete before we know what we're going to do with it.

Some relevant maxims from my notes:

    "Hardware is software crystallized early."
    "We shouldn't try to build a supercomputer until we have something
        to compute."

His point in these two maxims was, I think, that we're very good at
building hardware before we really know what we're going to do with it
(is there a lesson here for Project Athena with its tons of Ethernetted
VAXes "which will be used for undergraduate education" but a lack of
vision when it comes to educational software?)

He then described the Dynabook: a note-book sized interactive computer,
with about the same kind of interface as a notebook: you can doodle with
it, scribble, but it can also peruse the whole Library of Congress, as
well as past doodles.  "So portable you can carry something else, too."
[For a more complete description of Dynabook, see ``Fanatic Life and
Symbolic Death among the Computer Bums'', in "Two Cybernetic Frontiers"
by Stewart Brand.]

[An aside: one of the proposed forms of the Dynabook was a Walkman with
eyeglass flat-screen stereoptic displays (real 3-d complete with hidden
surfaces!).  This was punted because "no one would want to put something
on their head."  (Times change.)  Kay asserted that such displays ought
to be easier to produce than a note-book sized display, since there
would be fewer picture-elements required (a notebook would require maybe
1.5M pixels, while "the human eye can resolve only 140,000 points, so
you'd only have to put 140,000 pixels into your eyeglasses").  The flaw
in this argument is that most of those points the eye can resolve are in
the fovea, and you would have to put foveal-resolution over the entire
field of the glasses, meaning more pixels.  This is the opposite of
window-oriented displays.  Instead of a cluttered desk you have an
orderly bulletin-board: just display everything at once; the user
can look around the room at all the stuff.  If this room isn't enough
you can walk into the next room and look at more stuff.]

More maxims:
    "Great ideas are better than good ones because they both take about
    the same amount of time to develop and the great ideas aren't
    obsolete when you're done."

An observation:
    "In all the years that we had the Altos no one at Xerox ever
    designed anything by starting with a drawing on an Alto.  They
    always started with a sketch on the back of an envelope."
    Nicholas Negroponte and the Architecture Machine (ArcMac) group
    did the only study of what sketching is and what really goes on
    when you sketch, in a 1970 project called "Architecture by
    Yourself", but their funding dried up and no one remembers that
    stuff now.

    [An aside: the Macintosh's MacPaint program is the best drawing
    program that Kay has ever seen.  (The Macintosh people called him
    up one day and said, "Come on over, we have a present for you.")
    When he started playing with it he had a two-fold reaction:
    "Finally", and "Why did it take 12 years?"]

Homage was paid to the Burroughs B5000, a computer developed in 1961:

    Its operating system was entirely written in a higher-level
        language (ALGOL)
    It had hardware protection (which was later recognized to be
        a capability protection system)
    It had an object-oriented virtual memory system
    It had virtual data
        (any data reference could have a procedure attached to it for
        fetching and storing the real data--a bit was set as to which
        side of the assignment statement it went on)
    It was a multiprocessor (it had two processors, and much of the
        protection scheme was built in order to allow the two processors
        to work together).
    It had an integrated stack (which, sadly, is the only thing that
        people seem to remember).

"This was twenty years ago!  What happened, people?"
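
The "virtual data" idea above has a loose modern analogue: a data
reference with one procedure attached for fetching and another for
storing, selected by which side of the assignment the reference appears
on.  The sketch below shows the same effect with Python descriptors; it
is a hypothetical illustration, not the B5000 mechanism, and all names
in it are invented.

```python
# Hypothetical sketch: a "virtual datum" whose fetch and store sides
# each run a procedure, loosely analogous to the B5000's virtual data.

class Fahrenheit:
    """Virtual datum computed from a real Celsius field."""
    def __get__(self, obj, objtype=None):      # fetch side of assignment
        return obj.celsius * 9 / 5 + 32
    def __set__(self, obj, value):             # store side of assignment
        obj.celsius = (value - 32) * 5 / 9

class Temperature:
    fahrenheit = Fahrenheit()                  # attach the procedures
    def __init__(self, celsius):
        self.celsius = celsius

t = Temperature(100)
print(t.fahrenheit)   # 212.0  (fetch runs the conversion procedure)
t.fahrenheit = 32
print(t.celsius)      # 0.0    (store runs the inverse procedure)
```

Reads and writes of `t.fahrenheit` look like ordinary data access; the
attached procedures keep the real `celsius` field consistent.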

The B5000 had some flaws:
    The virtual data wasn't done right
        there were too many architectural assumptions about physical data
        formats
    "Char mode," which eliminated all the protections.  This was
        provided to let programmers used to the 1401 (I think) be
        comfortable.

User interface observations:

Piaget's three stages of development:

    Doing ----> Images -----> Symbols

doing: "a hole is to dig"
images: "getting the answer wrong in the water glass experiment"
symbols: "so we can say things that aren't true"

Bruner did a study that indicated these weren't stages; they were three
areas conflicting for dominance--as we mature, symbols begin to win out.

Ha...man did a study of inventiveness and creativity among
mathematicians and discovered that most mathematicians do their work
imagistically, very few of them work by manipulating symbols.  Some
mathematicians (notably Einstein) actually have a kinesthetic ability to
FEEL the spaces they are dealing with.

From psychology comes a principle applicable to user interfaces:

Kay's law: Doing with Images generates Symbols.

He cites Papert's "Mindstorms", where Papert describes programming a
computer to draw a circle.  A high school student, working with BASIC
would have before her the dubious assertion that a circle and
x**2+y**2=C are related.  A child, instructed to "play turtle" will
close her eyes while walking in a circle and say "I move forward a
little, then I turn a little, and I keep doing that until I make a
circle".  This is how a differential geometer views a circle.  Papert's
whole book is an illustration of Kay's Law.
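
The child's procedure above ("move forward a little, turn a little,
repeat") can be sketched directly; the code below is an illustrative
transliteration into Python rather than LOGO, and the function name is
mine, not Papert's.

```python
import math

# "I move forward a little, then I turn a little, and I keep doing that
# until I make a circle" -- the turtle's differential view of a circle.

def turtle_circle(steps=360, step_len=1.0):
    x = y = 0.0
    heading = 0.0                          # radians, facing east
    path = [(x, y)]
    for _ in range(steps):
        x += step_len * math.cos(heading)  # move forward a little
        y += step_len * math.sin(heading)
        heading += 2 * math.pi / steps     # turn a little
        path.append((x, y))
    return path

path = turtle_circle()
# After 360 small steps the path closes back on the starting point,
# tracing a polygon indistinguishable from a circle.
print(math.hypot(*path[-1]))  # very close to 0
```

No equation of the form x**2 + y**2 = C appears anywhere; the circle
emerges from the repeated local rule, which is exactly Kay's point.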

User interface maxims:
    Immediacy
        What you see is what you get (WYSIWYG)
    Modeless
        Always be able to start a new command without having to clean up
        after the old one.
    Generic
        What works in one place works in another
    User illusion
        Users make models of what goes on inside the machine.  Make the
        system in which most of the user's guesses are valid.  Not "some
        of the time it's wonderful, but most of the time you get
        surprised."
    Communicative
        He drew the distinction between reactive systems and interactive
        systems.  All his systems have been reactive--you would do
        something, and the system would react, opening up new
        possibilities.
    Undoability
        Even if it doesn't do much, if you never lose a character, your
        users will be happy.
    Functional
        "What will it do without user programming."

        He didn't use to think this was a user interface issue until he
        saw the STAR, which has the world's best user interface, except
        that it doesn't DO anything.  Not many people can afford a
        $17,000 coffee-warmer.
    Fun
        One should be able to fool around with no goal.  A user
        interface should be like Disneyland.

"Language is an extension of gestures--you're not really trying to say
something, you're really trying to point to something that is in someone
else's head.  A good mime can convey a message without a single word."

A model he encourages people to pursue is that of the AGENT.  When you
go into a library, you don't expect an oracle, you expect someone who
knows how to find what you're looking for.  It is much easier to make an
expert about the terrain of knowledge than an expert that can deal with
the knowledge itself.

He then played a videotape of a "telephone answering machine" being
developed by ArcMac (with funding from Atari).  It listened to the
pattern of a person's speech (in order to figure out when the person was
pausing long enough to be politely interrupted) and then channelled the
conversation into a context (that of taking a message) that the machine
could deal with.  It has a limited speech recognition ability, which
allows its owner to leave messages for other people:

    Hello, this is Doug's telephone, Doug isn't in right now, can I tell
    him who called?

    Uh, Clem...

    If you'd like to leave Doug a message, I can give it to him, otherwise
    just hang up and I'll tell him you called.

    Doug, I'm going to be in town next Tuesday and I'd like to get
    together with you to discuss the Memory project....

    Thank you, I'll tell him you called.

and

    Hello, this is Doug's telephone, Doug isn't in right now, can I tell
    him who called?

    It's me...

    Hi, Doug, you have three messages.

    Who are they from?...

    One is from UhClem, one is from Joe, and you have a mail message
    from Bill about the Future Fair.

    Tell me what UhClem has to say...

    [The machine plays a recording of Clem's message]

    Take a message for UhClem...

    Recording.

    Dinner next Tuesday is fine, how about Mary Chung's?

And so on.  UhClem calls later, and the machine plays back the recording
of Doug's message.


POINT OF VIEW IS WORTH 80 IQ POINTS:

    "A couple of years after Xerox punted the Alto, I met the people who
    made that decision.  They weren't dunces, as I had originally
    supposed, they just didn't have the right point of view: they had no
    criteria by which to tell the difference between an 8080 based word
    processor and a personal computer."

------------------------------

Date: 2 Apr 1984  12:18 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Stereo Vision for Robots

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Keith Nishihara    --  "Stereo Vision for Robots"

AI Revolving Seminar
Wednesday, April 4 at 4:00pm    8th floor playroom

   Recently we have begun, after a long interlude, to bring vision and
manipulation together at the MIT Artificial Intelligence Laboratory.
This endeavor has highlighted several engineering issues for vision:
noise tolerance, reliability, and speed.  I will describe briefly
several mechanisms we have developed to deal with these problems in
binocular stereo, including a high speed pipelined convolver for
preprocessing images and an "unstructured light" technique for improving
signal quality.  These optimizations, however, are not sufficient.  A
closer examination of the problems encountered suggests that broader
interpretations of both the binocular stereo problem and of the
zero-crossing theory of Marr and Poggio are required.
   In this talk, I will focus on the problem of making primitive surface
measurements; for example, to determine whether or not a specified
volume of space is occupied, to measure the range to a surface at an
indicated image location, or to determine the elevation gradient at that
position.  In this framework we make a subtle but important shift from
the explicit use of zero-crossing contours (in band-pass filtered
images) as the elements matched between left and right images, to the
use of the signs between zero-crossings.  With this change, we obtain a
simpler algorithm with a reduced sensitivity to noise and a more
predictable behavior.  The PRISM system incorporates this algorithm with
the unstructured light technique and a high speed digital convolver.  It
has been used successfully by others as a sensor in a path planning
system and a bin picking system.
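
The shift described in the abstract, from matching zero-crossing
positions to matching the signs between zero-crossings, can be
illustrated in one dimension.  The toy code below is a hypothetical
sketch of the idea, not the PRISM implementation; all names and the
test signals are mine.

```python
import math

# Match the SIGN of the band-pass filtered signal at each sample, and
# choose the disparity with the highest fraction of sign agreements.

def signs(xs):
    return [1 if x >= 0 else -1 for x in xs]

def sign_agreement(left, right, shift):
    """Fraction of samples where sign(left[i]) == sign(right[i + shift])."""
    ls, rs = signs(left), signs(right)
    pairs = [(ls[i], rs[i + shift]) for i in range(len(ls))
             if 0 <= i + shift < len(rs)]
    return sum(1 for a, b in pairs if a == b) / len(pairs)

def best_disparity(left, right, max_shift):
    return max(range(-max_shift, max_shift + 1),
               key=lambda d: sign_agreement(left, right, d))

# Toy "band-pass" signals: the right view is the left shifted by 3 samples.
left = [math.sin(i / 2.0) for i in range(40)]
right = [math.sin((i - 3) / 2.0) for i in range(40)]
print(best_disparity(left, right, 5))  # -> 3
```

Because whole runs of samples vote instead of a few contour positions,
a sign flip caused by noise at one sample barely moves the score, which
is the reduced noise sensitivity the abstract claims.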

------------------------------

End of AIList Digest
********************

∂03-Apr-84  2141	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #41
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Apr 84  21:41:15 PST
Date: Tue  3 Apr 1984 19:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #41
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Apr 1984      Volume 2 : Issue 41

Today's Topics:
  Physiognomic Awareness - Request,
  Waveform Analysis - IBM EKG Program Status,
  Logic Programming - Prolog vs. Pascal & IBM PC/XT Prolog Benchmarks,
  Fifth Generation - McCarthy Review
----------------------------------------------------------------------

Date: Sun, 1 Apr 1984  01:00 EST
From: RMS.G.DDS%MIT-OZ@MIT-MC.ARPA
Subject: Physiognomic Awareness and Ergonomic Design


        Physiognomic awareness and its relation to ergonomic design

        Does anyone know of any studies conducted on this exact topic,
        or one as close as can be?  I am interested in whether this
        has been explored.

------------------------------

Date: 26 Feb 84 0:05:42-PST (Sun)
From: hplabs!sdcrdcf!akgua!mcnc!ecsvax!hsplab @ Ucb-Vax
Subject: computer EKG
Article-I.D.: ecsvax.2050

I would like to footnote Jack Buchanan's note and add that IBM, which
helped support the original development of the Bonner program, has
announced that, effective June 1984, it will close its Health Care
Division, which currently manufactures its EKG products.  Support for existing products
will continue for at least seven years after product termination, however.

David Chou
Department of Pathology
University of North Carolina, Chapel Hill
      ...!mcnc!ecsvax!hsplab

------------------------------

Date: Mon 2 Apr 84 21:34:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Prolog vs. Pascal

The latest issue of IEEE Graphics (March '84) has an article
comparing interpreted Prolog with compiled Pascal for a graphics
application.  The results are surprising to me.

J.C. Gonzalez, M.H. Williams, and I.E. Aitchison of Heriot-Watt
University report on the comparison in "Evaluation of the
Effectiveness of Prolog for a CAD Application."  They
implemented a very simple set of graphical routines: much of
the code is given in the article.  They were building a 2-D
entry and editing system where the polygons were stored as lists
of vertices and edges.  The user could enter new points and
edit previously entered figures.  This formed the front end
to a system for constructing 3-D models from orthogonal 2-D
orthographic projections (engineering drawings).  Much of the
code has the flavor of "For each line (or point or figure)
satisfying given constraints, do the following ..."  (Often only
one entity would satisfy the constraints.)
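
That pattern can be transliterated into Python for illustration (the
paper's actual code is Prolog, and all names below are mine): figures
become a flat database of point "facts", and queries walk every fact
satisfying the constraints, much as Prolog's assert/retract and
backtracking do.

```python
# Hypothetical sketch of the article's database style, not its code.

points = []  # facts of the form (figure_id, x, y)

def assert_point(fig, x, y):          # analogue of Prolog assert
    points.append((fig, x, y))

def retract_figure(fig):              # analogue of Prolog retract
    points[:] = [p for p in points if p[0] != fig]

def points_of(fig):
    # "For each point satisfying given constraints, do the following..."
    return [(x, y) for f, x, y in points if f == fig]

assert_point("poly1", 0, 0)
assert_point("poly1", 10, 0)
assert_point("poly2", 5, 5)
retract_figure("poly2")
print(points_of("poly1"))  # [(0, 0), (10, 0)]
print(points_of("poly2"))  # []
```

In Prolog the query machinery, the database, and the "often only one
entity matches" case all come for free, which is why the Prolog version
could be so much more concise than the Pascal one.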

The authors report that the Prolog version (using assert and
retract to manipulate the database) was more concise, more readable,
and clearer than the Pascal version.  The Prolog version also took
less storage, was developed more quickly, and was developed with
minimum error.  What is more remarkable is that the interpreted
Prolog ran about 25% faster than the compiled Pascal.

They were using a PDP-11/34 with the NU7 Prolog interpreter
from Edinburgh and the VU Pascal compiler from Vrije University.

------------------------------

Date: Fri 23 Mar 84 10:32:55-PST
From: Herm Fischer <HFischer@USC-ECLB>
Subject: IBM PC/XT Prolog Benchmarks

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

[...]

IBM was kind enough to let us have PC/IX for today, and we
brought up UNSW Prolog.  With a minor exception the code
and makefiles were compatible with PC/IX.  (They have frustrated
me for a whole year, being incompatible with every PCDOS "C" compiler
from Lattice onward.)

PC/IX and Prolog are neatly integrated; all Unix features, and
even shell calls, can be made within the Prolog environment.
Even help files are included.  It is kind of nice to be tracing
away and browse and modify your prolog code within the interpretive
environment, using the INed (nee rand) editor and all the other
Unix stuff.

The 64 K limitation of PC/IX bothers me, more emotionally than
factually, because only one of my programs couldn't be run today.
I'm sure I will get really upset unless I find some hack around
this limitation.

A benchmark really surprises me.  The Zebra problem (using
Pereira's solution) provides the following statistics:

DEC-2040      6 seconds (if compiled)      (Timed on TOPS-20)
             42 seconds (if interpreted)   (  "   "    "    )

VAX-11/780  204 secs (interpreted) (UNSW)  (Timed on Unix Sys III)

IBM PC/XT   544 secs (interpreted) ( " )   (Timed on   "   "   " )

The latter 2 times are wall-clock with no other jobs or users
running, and these two Prologs were compiled from the same source
code and make file!  The PC/IX was CPU-bound, and its disk never
blinked during the execution of the test.

-- Herm Fischer

------------------------------

Date: Wed 21 Mar 84 20:47:07-PST
From: Ramsey Haddad <HADDAD@SU-SCORE.ARPA>
Subject: fifth generation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

For anyone interested in these things, there is a review by John
McCarthy of Feigenbaum and McCorduck's "The Fifth Generation:
Artificial Intelligence and Japan's Computer Challenge to the World"
in the April 1984 issue of REASON magazine.


[The following is a copy of Dr. McCarthy's text, reprinted with
his permission. -- KIL]


The Fifth Generation - Artificial Intelligence and Japan's Computer
Challenge to the World - by Edward Feigenbaum and Pamela McCorduck,
Addison-Wesley Publishing Co.


Review of Feigenbaum and McCorduck - for Reason


	Japan has replaced the Soviet Union as the world's second
place industrial power.  (Look at the globe and be impressed).
However, many people, Japanese included, consider that this success
has relied too much on imported science and technology - too much for
the respect of the rest of the world, too much for Japanese
self-respect, and too much for the technological independence needed
for Japan to continue to advance at previous rates.  The Fifth
Generation computer project is one Japanese attempt to break out of
the habit of copying and generate Japan's own share of scientific and
technological innovations.

	The idea is that the 1990s should see a new generation of
computers based on "knowledge information processing" rather than
"data processing".  "Knowledge information processing" is a vague term
that promises important advances in the direction of artificial
intelligence but is noncommittal about specific performance.  Edward
Feigenbaum describes this project in The Fifth Generation - Artificial
Intelligence and Japan's Computer Challenge to the World, predicts
substantial success in meeting its goals, and argues that the U.S.
will fall behind in computing unless we make a similar coherent
effort.

	The Fifth Generation Project (ICOT) is the brainchild of
Kazuhiro Fuchi of the Japanese government's Electro-Technical
Laboratory.  ICOT, while supported by industry and government, is an
independent institution.  Fuchi has borrowed about 40 engineers and
computer scientists, all under 35, for periods of three years, from
the leading Japanese computer companies.  Thus the organization and
management of the project is as innovative as one could ask.  With
only 40 people, the project is so far a tiny part of the total
Japanese computer effort, but it is scheduled to grow in subsequent
phases.

	The project is planned to take about 10 years, during which
time participants will design computers based on "logic programming",
an invention of Alain Colmerauer of the University of Marseilles in
France and Robert Kowalski of Imperial College in London, and
implemented in a computer programming language called Prolog.  They
want to use additional ideas of "dataflow" developed at M.I.T.  and to
make machines consisting of many processors working in parallel.  Some
Japanese university scientists consider that the project still has too
much tendency to look to the West for scientific ideas.

	Making parallel machines based on logic programming is a
straightforward engineering task, and there is little doubt that this
part of the project will succeed.  The grander goal of shifting the
center of gravity of computer use to the intelligent processing of
knowledge is more doubtful as a 10 year effort.  The level of
intelligence to be achieved is ill-defined.  The applications are also
ill-defined.  Some of the goals, such as common sense knowledge and
reasoning ability, require fundamental scientific discoveries that
cannot be scheduled in advance.

	My own scientific field is making computer programs with
common sense, and when I visited ICOT, I asked who was working on the
problem.  It was disappointing to learn that the answer was "no-one".
This is a subject to which the Japanese have made few contributions,
and it probably isn't suited to people borrowed from computer
companies for three years.  Therefore, one can't be optimistic that
this important part of the project goals will be achieved in the time
set.

	The Fifth Generation Project was announced at a time when the
Western industrial countries were ready for another bout of viewing
with alarm; the journalists have tired of the "energy crisis" - not
that it has been solved.  Even apart from the recession, industrial
productivity has stagnated; it has actually declined in industries
heavily affected by environmental and safety innovations.  Meanwhile
Japan has taken the lead in automobile production and in some other
industries.

	At the same time, artificial intelligence research was getting
a new round of publicity that seems to go in a seven-year cycle.  For
a while every editor wants a story on Artificial Intelligence and the
free lancers oblige, and then suddenly the editors get tired of it.
This round of publicity has more new facts behind it than before,
because expert systems are beginning to achieve practical results,
i.e. results that companies will pay money for.

	Therefore, the Fifth Generation Project has received enormous
publicity, and Western computer scientists have taken it as an
occasion for spurring on their colleagues and their governments.
Apocalyptic language is used that suggests that there is a battle to
the death - only one computer industry can survive, theirs or ours.
Either we solve all the problems of artificial intelligence right away
or they walk all over us.

	Edward Feigenbaum is the leader of one of the major groups
that has pioneered expert systems -- with programs applicable to
chemistry and medicine.  He is also one of the American computer
scientists with extensive Japanese contacts and extensive interaction
with the Fifth Generation Project.

	Pamela McCorduck is a science writer with a previous book,
Machines Who Think, about the history of artificial intelligence
research.

	The Fifth Generation contains much interesting description of
the Japanese project and American work in related areas.  However,
Feigenbaum and McCorduck concentrate on two main points.  First,
knowledge engineering will dominate computing by the 1990s.  Second,
America is in deep trouble if we don't organize a systematic effort to
compete with the Japanese in this area.

	While knowledge engineering will increase in importance, many
of its goals will require fundamental scientific advances that cannot
be scheduled to a fixed time frame.  Unfortunately, even in the United
States and Britain, the hope of quick applications has lured too many
students away from basic research.  Moreover, our industrial system
has serious weaknesses, some of which the Japanese have avoided.  For
example, if we were to match their 40 engineer project according to
output of our educational system, our project would have 20 engineers
and 20 lawyers.

	The authors are properly cautious about what kind of an
American project is called for.  It simply cannot be an Apollo-style
project, because that depended on having a rather precise plan in the
beginning that could see all the way to the end and did not depend on
new scientific discoveries.  Activities that were part of the plan
were pushed, and everything that was not part of it was ruthlessly
trimmed.  This would be disastrous when it is impossible to predict
what research will be relevant to the goal.

	Moreover, if it is correct that good new ideas are more likely
to be decisive in this field at this time than systematic work on
existing ideas, we will make the most progress if there is money to
support unsolicited proposals.  The researcher should propose goals
and the funders should decide how he and his project compare with the
competition.

	A unified government-initiated plan imposed on industry has
great potential for disaster.  The group with the best political
skills might get their ideas adopted.  We should remember that present
day integrated circuits are based on an approach rejected for
government support in 1960.  Until recently, the federal government
has provided virtually the only source of funding for basic research
in computer technology.  However, the establishment of
industry-supported basic research through consortia like the
Microelectronics and Computer Technology Corporation (MCC), set up in
Austin, Texas under the leadership of Admiral Bobby Inman, represents
a welcome trend--one that enhances the chances of making the
innovations required.

------------------------------

End of AIList Digest
********************

∂04-Apr-84  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #42
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Apr 84  17:06:49 PST
Date: Wed  4 Apr 1984 15:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #42
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Apr 1984      Volume 2 : Issue 42

Today's Topics:
  AI Tools - Lisp Eigenanalysis Package Request,
  Automata - PURR-PUSS References & Cellular Automata Request,
  AI Publications - SIGBIO Newsletter,
  Expert Systems - Nutrition System Request & Recipe Planner &
      Legal Reasoning Systems,
  AI Programming - Discussion
----------------------------------------------------------------------

Date: 30 Mar 84 11:07:45-PST (Fri)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!eneevax!phaedrus @ Ucb-Vax
Subject: Lisp equivalent of Linpack info wanted
Article-I.D.: eneevax.103

I was wondering if anybody knows of any packages in Lisp that do the same
thing that LINPACK does (i.e., finding eigenvalues, eigenvectors, etc.).
But it must do it fast.

My problem is that I need to do some linear algebra stuff, but I need to
be able to load it into vaxima (MACSYMA on a VAX running 4.1BSD).  If you
have any suggestions I would be very grateful.

                                Thanks
                                Pravin Kumar

From the contorted brain, and the rotted body of THE SOPHIST

ARPA:   phaedrus%eneevax%umcp-cs@CSNet-Relay
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!eneevax!phaedrus

------------------------------

Date: 4 Apr 84 21:37:21-EST (Wed)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Request for PURR-PUSS reference

Recently someone mentioned a system called PURR-PUSS (I think it was Ken
Laws), in connection with determining the configuration of a finite
state machine based on observation of input-output relationships.  I'm
doing some work related to that, and would appreciate references to
PURR-PUSS.

     Tom Portegys, Bell Labs Naperville, Ill., ihuxv!portegys

[I ran across PURR-PUSS in J.H. Andreae's "PURR-PUSS: Purposeful
Unprimed Rewardable Robot", Electrical Engineering Report No. 24,
Sept. 1974, Man-Machine Studies, Progress Report UC-DSE/4(1974) to
the Defence Scientific Establishment, Editor J.H. Andreae, Dept.
of EE, Univ. of Canterbury, Christchurch, New Zealand, pp. 100-150.
This article describes several applications of the PUSS (Predictor
Using Slide and Strings) learning program, including the identification
of a repetition pattern in a seemingly random H/T sequence.  (The
pattern was two random choices followed by a repeat of the second choice.)
References are given to earlier reports in this series.
I also have copies of reports 26, 27, and 28 (Sep. 1975);  each has
at least one article on the use of PUSS learning/predicting modules
for the reasoning component in some application.  -- KIL]
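
[The H/T regularity mentioned above is easy to reproduce: the generator
makes two random coin flips and then repeats the second one.  The sketch
below (which is not Andreae's PUSS algorithm, only the target pattern it
discovered) shows that a predictor which has identified the three-symbol
cycle is right on every third symbol and at chance elsewhere.]

```python
# Sketch of the seemingly random H/T sequence from Andreae's report:
# each three-symbol cycle is two coin flips followed by a repeat of the
# second flip.  This reproduces the pattern, not the learning program.
import random

random.seed(1)

def ht_sequence(cycles):
    out = []
    for _ in range(cycles):
        a = random.choice("HT")
        b = random.choice("HT")
        out.extend([a, b, b])           # the hidden regularity: b repeats
    return out

seq = ht_sequence(1000)
# A predictor keyed on the cycle is perfect on every third symbol
# (it repeats its predecessor) and ~50% elsewhere -- about 2/3 overall.
hits = sum(1 for i in range(2, len(seq), 3) if seq[i] == seq[i - 1])
print(hits, len(seq) // 3)
```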

------------------------------

Date: 28 Mar 84 12:30:21-PST (Wed)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Cellular Automata -- Request for pointers
Article-I.D.: dartvax.1024

I was fascinated by the description of "cellular automata" in last
month's Computer Recreations section of Scientific American.  The mass
of interacting parallel processes described there seems
singularly appropriate for the simulation of phenomena of interest to
AI workers.  With complex enough rules of interaction between elements
it seems one could simulate neurons in the brain or the evolutionary
process.  I'm aware, from a course I took long ago in Cognitive
Psychology, that psychologists use dynamically interacting models of
this sort.
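
[For readers who missed the column: a cellular automaton is a lattice of
cells, each updated in parallel from its neighbors' states by a fixed rule.
A minimal one-dimensional example, using Wolfram's "elementary" rule-number
convention (rule 90 here), can be sketched in a few lines.]

```python
# Minimal 1-D cellular automaton: every cell updates simultaneously from
# its left neighbor, itself, and its right neighbor via an 8-entry rule
# table encoded as a number 0-255 (Wolfram's "elementary" rules).

def step(cells, rule):
    n = len(cells)
    return [
        # the 3-bit neighborhood (left, center, right) indexes into `rule`
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                     # single live cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 90)       # rule 90 grows a Sierpinski-like triangle
```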

  This note is to request pointers to any research that's currently
being done in this area, specifically as it relates to AI.

Thanks in advance,

      --Lorien Y. Pratt
        Dartmouth College Library
        Hanover, NH  03755

        decvax!dartvax!lorien

------------------------------

Date: Wed, 4 Apr 84 00:40:26 EST
From: "Roy Rada"@UMich-MTS.Mailnet
Subject: SIGBIO Newsletter

[...]

As Editor of the ACM SIGBIO Newsletter, I would
like to publish material on AI in Medicine.  My
own research focuses on continuity in knowledge
acquisition for medical expert systems.  Please
send me whatever you feel might be relevant.

--Roy Rada

------------------------------

Date: 3 Apr 1984 1630-PST
From: Nitsan Har-gil <har-gil@USC-ISIF>
Subject: Expert System for Nutrition

Does anyone know of expert systems dealing with nutrition (food, etc.)?
Something to which you can give a typical daily menu and which will
respond with nutritional deficiencies, etc.  Thanks in advance, Nitsan.

------------------------------

Date: Wed, 4 Apr 84 15:26:23 EST
From: Kris Hammond <Hammond@YALE.ARPA>
Subject: Re: Recipe Planner

[This is a response to a personal query about Kris' work in recipe planning;
he agreed to let me share it with the list.  I would be interested in hearing
about other recipe-based systems, including those in the chemistry domain.
-- KIL]

I have a paper in AAAI-83, "Planning and Goal Interaction: The use of
past solutions in present situations."  My work centers around the
notion of organizing planning knowledge around the interactions between
features rather than around individual features themselves.  In the
cooking domain this means the planner has to anticipate the interactions
between different tastes and textures, and search for past plans that
have already dealt with this interaction.  The end result is a system
that looks at an input situation (a request for a dish that includes
many items and tastes) and tries to find a recipe for an analogous past
situation.

The paper is the analysis of an example which uses knowledge of feature
interaction to 1) analyze the original input, 2) index into a useful
plan, 3) suggest the type of modifications that have to be made to that
plan, 4) search for problems in the resulting plan, and 5) propose
general solutions to the problems encountered.

I [am now working on] a more general application of the idea of organizing
planning information in terms of goal and plan interaction.  [...]

The cooking paper is on YALE-RES <A.HAMMOND.WORK>WOK.MSS.

Thanks for the interest.

Kris Hammond

------------------------------

Date: 1 Apr 84 10:28:53 CST (Sun)
From: ihnp4!utcsrgv!dave@Berkeley
Subject: expert systems and legal reasoning

A recent request asked for information about expert systems and legal
reasoning. I suggest anyone interested in that field get onto Law:Forum,
a discussion group running under CONFER on an MTS system at Wayne
State University in Michigan. Access is free, with computer charges
and Telenet charges being picked up by the Markle Foundation grant
which is funding the project. Most of the major people involved
in developing legal reasoning systems (Thorne McCarty, Layman Allen,
Jim Sprowl, several others) are involved in Law:Forum and participate
regularly.

If you want to get onto Law:Forum, and can be reached electronically,
send me your electronic address:
        ihnp4!utcsrgv!dave@BERKELEY             (ARPA)
        dave.Toronto                            (CSnet)
        ihnp4!utcsrgv!dave                      (UUCP)

If you have no electronic address, I can't ship out the access information
to you, so send a letter directly to the conference organizer:
        Prof. Jennifer Bankier
        Dalhousie Law School
        Dalhousie University
        Halifax, Nova Scotia
        Canada                  (sorry, don't have the postal code handy)


Dave Sherman
The Law Society of Upper Canada
Toronto
(416) 947-3466

------------------------------

Date: Sun, 1 Apr 84 21:50:25 cst
From: George R. Cross <cross%lsu.csnet@csnet-relay.arpa>
Subject: Legal AI Systems


We are developing a model of the Louisiana Civil Code.
The representation language is called ANF (Atomically
Normalized Form) and is being used to develop the
conceptual retrieval and reasoning system CCLIPS (Civil
Code Legal Information Processing System).  Some references
are:

deBessonet, C.G., Hintze, S.J., and Waller, W., "Automated
Retrieval of Information: Toward the Development of a Formal
Language for Expressing Statutes," Southern University Law Review,
6(1), 1-14, 1979.

deBessonet, C.G., "A Proposal for Developing the Structural
Science of Codification," Rutgers Journal of Computers,
Technology and the Law, 1(8), 47-63, 1980.

deBessonet, C.G., "An Automated Approach to Scientific
Codification," Rutgers Computer and Technology Law Journal,
9(1), 27-75, 1982.

deBessonet, C.G., "An Automated Intelligent System Based on a
Model of a Legal System," Rutgers Journal of Computers,
Technology, and the Law, 10, to appear, 1983.

Technical Reports:

83-011 Formalization of Legal Information
83-023 Natural Language Generation for a Legal Reasoning
       System
83-002 Processing and Representing Statutory Formalisms
84-006 Representation of Some Aspects of Legal Causality
83-005 Representation of Legal Knowledge

Copies of the above Technical Reports may be requested from
<techrep%lsu@csnet-relay> or from

      Technical Reports Secretary
      Department of Computer Science
      Louisiana State University
      Baton Rouge, LA  70803-4020


George R. Cross              Cary G. deBessonet
<cross%lsu@csnet-relay>      <debesson%lsu@csnet-relay>

------------------------------

Date: Tue, 3 Apr 84 13:35 PST
From: DSchmitz.es@Xerox.ARPA
Subject: Legal AI research

For all who requested, I am maintaining a copy of all the responses to
my request for information about ongoing AI research in legal-related
fields in the following file:  [Oly]<DSchmitz>LegalAI.mail

There are about 15 responses in there now (including those who asked to
be copied on the responses) and I will be adding any new ones I receive
as they arrive.

David

------------------------------

Date: 1 Apr 84 22:35:06 EST
From: Louis Steinberg <STEINBERG@RUTGERS.ARPA>
Subject: Re: Stolfo's call for discussion

One way AI programming is different from much of the programming in other
fields is that for AI it is often impossible to produce a complete set of
specifications before beginning to code.

The accepted wisdom of software engineering is that one should have a
complete, final set of specifications for a program before writing a
single line of code.  It is recognized that this is an ideal, not
typical reality, since often it is only during coding that one finds
the last bugs in the specs.  However, it is held up as a goal to
be approached as closely as possible.

In AI programming, on the other hand, it is often the case that an
initial draft of the code is an essential tool in the process of
developing the final specs.  This is certainly the case with the
current "expert system" style of programming, where one gets an expert
in some field to state an initial set of rules, implements them, and
then uses the performance of this implementation to help the expert
refine and extend the rules.  I would argue it is also the case in fields
like Natural Language and other areas of AI, to the extent that we
approach these problems by writing simple programs, seeing how they
fail, and then elaborating them.
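
[The state-rules/implement/observe/refine loop described above requires
only minimal machinery; the point is that the rule set, not the
interpreter, is what evolves between iterations.  A toy forward-chaining
interpreter is sketched below; the rule contents are invented,
VAX-configuration-flavored names for illustration, not R1's actual rules.]

```python
# Toy forward-chaining rule interpreter: rules are (premises, conclusion)
# pairs over symbolic facts.  In the methodology discussed above the
# rules are the draft specification: the expert inspects the system's
# conclusions and edits the rule list, not the interpreter.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # fire rules until quiescence
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Invented example rules (illustrative names only):
rules = [
    ({"cpu-11/780", "needs-unibus-device"}, "add-unibus-adaptor"),
    ({"add-unibus-adaptor"}, "check-backplane-space"),
]
print(sorted(forward_chain({"cpu-11/780", "needs-unibus-device"}, rules)))
```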

A classic example of this seems to be the R1 system, which DEC uses to
configure orders for VAXen.  An attempt was made to write this program
using a standard programming approach, but it failed.  An attempt was
then made using an expert system approach, which succeeded.  Once the
program was in existence, written in a production system language, it
was successfully recoded into a more standard programming language.
Can anyone out there in net-land confirm that it was problems with
specification which killed the initial attempt, and that the final
attempt succeeded because the production system version acted as the
specs?

------------------------------

End of AIList Digest
********************

∂05-Apr-84  2050	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #43
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Apr 84  20:50:17 PST
Date: Thu  5 Apr 1984 19:21-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #43
To: AIList@SRI-AI


AIList Digest             Friday, 6 Apr 1984       Volume 2 : Issue 43

Today's Topics:
  Nonmonotonic Logic - Reference Request,
  AI Applications - Algebra and Geometry on the IBM-PC,
  News - Computer Ethics Prize,
  Linguistics - Use of "and",
  AI Computing - Software Engineering,
  Reading List on Logic and Parallel Computation
  Seminars - Automating Shifts of Representation &
    Internalized World Knowledge &
    Linguistic Structuring of Concepts &
    Protocol Analysis
----------------------------------------------------------------------

Date: 4 Apr 84 18:20:23 EST  (Wed)
From: Don Perlis <perlis%umcp-cs.csnet@csnet-relay.arpa>
Subject: nonmonotonic reference request


                        BIBLIOGRAPHY ON NON-MONOTONIC LOGIC


I  am  compiling  a  bibliography of literature on nonmonotonic logic, to be
made available to the AI community, and in particular  to  the  workshop  on
non-monotonic  reasoning  that  will take place in October in New Paltz, New
York.

I  would  greatly  appreciate  references  from  the  AI  community, both to
published and  unpublished  material  (the  latter  as  long  as  it  is  in
relatively  completed  form  and copies are available on request).  Material
can be sent to me at perlis@umcp-cs and also by post to  D. Perlis, Computer
Science Department, University of Maryland, College Park, MD 20742.

Thanks in advance for your cooperation.

------------------------------

Date: 5 Apr 84 15:16:43 PST (Thursday)
From: Cornish.PA@Xerox.ARPA
Subject: Application of LISP programs to Math Ed Software

I am interested in AI programs in the areas listed below.  Could
someone provide me with pointers to the significant work done in these
areas?  Could someone advise me whether work done in these areas could
feasibly run on existing Lisp systems for the IBM-PC?  "Feasibly run"
means that the programs would be responsive enough to form the basis
of a math ed product.

1. Solution of Algebra word problems
2. Analysis of proofs in plane Geometry


Thank you very much,

Jan Cornish

------------------------------

Date: Wed 4 Apr 84 17:22:09-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Computer Ethics

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


         COMPETITION ANNOUNCEMENT:  THE METAPHILOSOPHY PRIZE

     METAPHILOSOPHY will  award a  prize  of  $500  to the  author who
submits the best essay in computer ethics between January 1, 1984, and
December 31, 1984.  The prize-winning  essay will be published as  the
lead article in  the April 1985 issue of METAPHILOSOPHY, which will be
devoted entirely  to computer  ethics.  Other  high-quality  essays in
computer ethics will be accepted for publication in the same issue.  A
panel of experts in computer ethics will select the winners.  To enter
the competition, send four copies of your essay to:

                      Terrell Ward Bynum
                      Editor, Metaphilosophy
                      Metaphilosophy Foundation
                      Box 32
                      Hyde Park, NY  12538

     Readers unfamiliar  with  the  field of  computer  ethics  should
consult the January  1984 issue of  METAPHILOSOPHY.  Those  unfamiliar
with specifications  for  manuscript preparation  should  consult  any
recent issue.

------------------------------

Date: 04 Apr 84  11:14:01 bst
From: J.R.Cowie%rco@ucl-cs.arpa
Subject: Use of "and"

There is another way of looking at the statement
 all customers in Indiana and Ohio
which seems simpler than expanding it into the new phrase
 all customers in Indiana AND all customers in Ohio
Instead of doing this, why not treat Indiana and Ohio as a single new
conceptual entity, giving
 all customers in (Indiana and Ohio).

This seems simpler to me. It would mean the database would have to
allow aggregations of this type, but I don't see that as being
particularly problematic.
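
[The two readings coincide because membership distributes over union:
the aggregate region (Indiana and Ohio) selects exactly the union of the
two per-state query results.  A quick check in set terms, with invented
sample data:]

```python
# Toy check that the two readings give the same answer: the distributed
# form ("customers in Indiana AND customers in Ohio", i.e. a union of
# per-state queries) versus the aggregate-region form.  Data is made up.

customers = {
    "Acme": "Indiana",
    "Bolt": "Ohio",
    "Cogs": "Illinois",
}

def in_state(state):
    return {name for name, s in customers.items() if s == state}

region = {"Indiana", "Ohio"}                                # the aggregate
reading1 = in_state("Indiana") | in_state("Ohio")           # distributed form
reading2 = {name for name, s in customers.items() if s in region}
print(reading1 == reading2)   # the aggregate reading is equivalent
```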

Have I missed some subtle point here?

Jim Cowie.

------------------------------

Date: 5 April 1984 0949-cst
From: Dave Brown    <DBrown @ HI-MULTICS>
Subject: Re: Stolfo's call for discussion

  A side point about Louis Steinberg's response: The accepted wisdom
is actually that AI and plain commercial programming have shown that
specification in complete detail is really just mindless hacking, by
a designer rather than a hack.
  *However* the salesmen of "software engineering methodologies" are
just getting up to about 1968 (the first software engineering
conference), and are flogging the idea that perfect specifications
are possible and desirable.
  Therefore the state of practice lags behind the state of the art
an unconscionable distance....
  AI leads the way, as usual.

  --dave (software engineering ::= brilliance | utter stupidity) brown

------------------------------

Date: 05 Apr 84  1711 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Reading List on Logic and Parallel Computation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

INSTRUCTOR: Professor G. Kreisel
TIME:   Monday  4:15-6pm
PLACE:  252 Margaret Jacks Hall
       (Stanford Computer Science Department)
TOPIC:  Logic and parallel computation.

Below is a reading list that was compiled from discussion
at the organizational meeting.  [...]


          --------------------------------------------------
                             Reading List
          --------------------------------------------------

[Carolyn Talcott - 362 Margaret Jacks - CLT@SU-AI - has copies
 of all the references]


                         Parallel Computation
                        ---------------------

Fortune,S. and Wyllie,J. [1978]
Parallelism in random access machines
Proc. 10th  ACM Symposium on Theory of Computation (STOC)
pp.114-118.


Valiant,L. Skyum,S.[1981]
Fast parallel computation of polynomials using few processors
Proc. 10th Symposium on Mathematical Foundations of Computer Science
LNCS 118, pp. 132-139.

von zur Gathen,J.[1983]
Parallel algorithms for algebraic problems
Proc. 15th  ACM Symposium on Theory of Computation (STOC)
pp. 17-23.

Mayr,E.[1984], Fast selection on para-computers (slides)

Karp,R.M. Wigderson,A.[1984?]
A Fast Parallel Algorithm for the Maximal Independent Set Problem
  - Extended Abstract (manuscript)


        Continuous operations on Infinitary Proof Trees, etc.
        -----------------------------------------------------

Rabin,M.O.[1969]
Decidability of 2nd Order Theories and Automata on Infinite Trees,
TransAMS 141, pp.58-68.

Kreisel,G. Mints,G.E. Simpson,S.G.[1975]
The Use of Abstract Language in Elementary Metamathematics;
  Some Pedagogic Examples,
in Logic Colloquium '72, LNM 453, pp.38-131.

Mints,G.E.[1975] Finite Investigations of Transfinite Derivations,
J.Soviet Math. 10 (1978) pp. 548-596. (Eng.)

Sundholm,B.G.[1978] The Omega Rule: A Survey,
Bachelor's Thesis, University of Oxford.

------------------------------

Date: 3 Apr 84 13:00:42 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Automating Shifts of Representation

            [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                      Machine learning brown bag seminar


    Title: Automating Shifts of Representation
    Speaker: P. J. Riddle
    Date: Wednesday, April 11, 1984, 12:00-1:30
    Location: Hill Center, Room 254

       My  thesis  research  deals  with  automatically  shifting  from one
    knowledge representation of a certain problem to another representation
    which is more efficient for the problem class  to  which  that  problem
    belongs.  I believe that "...changes of representation are not isolated
    'eureka'  phenomena  but  rather  can  be  decomposed into sequences of
    relatively minor representation shifts". I am  attempting  to  discover
    primitive representation shifts and techniques for automating them.  To
    achieve  this  goal  I  am  attempting  to  define and automate all the
    primitive  representation  shifts  explored  in  the   Missionaries   &
    Cannibals (M&C) problem.  The main types of representation shifts which
    I  have already identified are: forming macromoves, removing irrelevant
    information, and removing redundant  information.    Initially  I  have
    concentrated  on  a  technique  for automatically acquiring macromoves.
    Macromoves succeed in shifting the problem space to a higher  level  of
    abstraction.    Assuming  that  the macromoves are appropriate for this
    problem class, this will make the problem solver  much  more  efficient
    for subsequent problems in this problem class.

------------------------------

Date: Wed, 4 Apr 84 10:27:46 pst
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Internalized World Knowledge

       [Forwarded from the CSLI bboard by Laws@SRI-AI.]

             BERKELEY COGNITIVE SCIENCE PROGRAM
                        Spring 1984

            IDS 237B - Cognitive Science Seminar

          Time:        Tuesday, April 10, 1984, 11-12:30pm
          Location:    240 Bechtel


              HOW THE MIND REFLECTS THE WORLD
                      Roger N. Shepard
       Department of Psychology, Stanford  University

Through biological evolution,  enduring  characteristics  of
the  world  would  tend  to become internalized so that each
individual would not have to learn  them  de  novo,  through
trial  and possibly fatal error.  The most invariant charac-
teristics are quite abstract: (a) Space  is  locally  three-
dimensional,  Euclidean, and isotropic except for a gravita-
tionally conferred unique upright direction. (b) For any two
positions  of  a  rigid  object, there is a unique axis such
that the object can be most  simply  carried  from  the  one
position  to  the  other  by  a  rotation  around  that axis
together with a translation along it. (c) Information avail-
able  to  us about the external world and about our relation
to it is analyzable into  components  corresponding  to  the
invariants  of  significant  objects,  spatial  layouts, and
events and, also, into components corresponding to the tran-
sitory  dispositions, states, and manners of change of these
and of the self relative to these.   Having  been  internal-
ized,  such  characteristics  manifest themselves as general
laws governing the representation of objects and events when
the  relevant information is fully available (normal percep-
tion), when it is only partially available (perceptual  fil-
ling  in or perceptual interpretation of ambiguous stimuli),
and when it  is  entirely  absent  (imagery,  dreaming,  and
thought).    Phenomena  of  identification,  classification,
apparent motion, and imagined transformation illustrate  the
precision and generality of the internalized constraints.

*****  Followed by a lunchbag discussion with speaker  *****
***  in the IHL Library (Second Floor, Bldg. T-4) from 12:30-2  ***

------------------------------

Date: Wed 4 Apr 84 18:49:45-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Linguistic Structuring of Concepts

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Issues In Language, Perception and Cognition
WHO: Len Talmy, Cognitive Science Program and German Dept., UC Berkeley
WHEN: Monday April 9, 12:00 noon
WHERE: Room 100, Psychology

                How Language Structures its Concepts

Languages have two kinds of elements: open-class, comprising  the  roots
of  nouns,  verbs,  and adjectives, and closed-class, comprising all in-
flections, particle words, grammatical categories, and the like.  Exami-
nation  of a range of languages reveals that closed-class elements refer
exclusively to certain concepts, and seemingly never to concepts outside
those  (e.g., inflection on nouns may indicate number, but never color).
My idea is that all closed-class elements taken  together  constitute  a
very  special  group:  they  code  for a fundamental set of notions that
serve to structure the conceptual material expressed by language.   More
particularly,   their  references constitute a basic notional framework,
or scaffolding, around which is organized the more contentful conceptual
material  represented by open-class (i.e., lexical) elements.  The ques-
tions to be addressed are: a) Which exactly are the notions specified by
closed-class  elements, and which notions are excluded?  b) What proper-
ties are shared by the included notions and  absent  from  the  excluded
ones?   c) What functions are served by this design feature of language,
i.e., the existence in the first place of  a  division  into  open-  and
closed-class  subsystems,  and  then the particular character that these
have?  d) How does this structuring system specific to language  compare
with  those  in other cognitive subsystems, e.g. in visual perception or
memory?  With question (d), this linguistic investigation opens out into
the  issue  of  structuring within cognitive contents in general, across
cognitive domains.

------------------------------

Date: 4 Apr 1984 12:35:50-PST
From: mis at SU-Tahoma
Subject: S.P.A. - Seminar in Protocol Analysis

     [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                  M. Pavel &  D. Sleeman
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           S.P.A - SEMINAR IN PROTOCOL ANALYSIS
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           Introduction to protocol  analysis:
       an  example  from  developmental psychology.

                       Jean Gascon
                       Stuart Card

              Xerox Palo Alto Research Center

        The first of this series of seminars on protocol analysis
will be structured as a tutorial on protocol analysis and comput-
er simulation.  Stuart Card will give a  brief  overview  of  the
history, motivation and practice of the methodology.  Jean Gascon
will  then  illustrate,  with  a  simple  example,  how  protocol
analysis  is  performed.  The  application  area  will  come from
developmental psychology.  First, protocols of children of  vari-
ous  ages  performing  one  of  Piaget's "seriation" tasks will be
shown.  We will then explain how one goes from the actual data to
the  construction of the "problem space" (a la Newell and Simon).
The next step consists of regrouping the problem spaces  of  dif-
ferent  subjects  into a more general psychological model (dubbed
BG in this particular case). We will see how the BG language  fa-
cilitates  the  writing of simulation models.  A computer program
that does automatic protocol analysis of the seriation  protocols
will  then  be introduced.  This program provides some additional
insights about the process of protocol analysis itself.   In  the
conclusion  we  will discuss the advantages and inconveniences of
protocol analysis relative to the other  methodologies  available
in cognitive psychology.

     ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           Place:  Jordan Hall, Room 100
           Time:   1:00 pm, Wednesday  April 11, 1984
     ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

------------------------------

End of AIList Digest
********************

∂07-Apr-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #44
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Apr 84  23:22:55 PST
Date: Sat  7 Apr 1984 22:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #44
To: AIList@SRI-AI


AIList Digest             Sunday, 8 Apr 1984       Volume 2 : Issue 44

Today's Topics:
  Cellular Automata - References,
  Image Understanding - Expert System for Radiograph Analysis,
  Education - Model AI Curriculum,
  AI Funding - Alan Kay Review & Strategic Computing,
  Seminars - Language Structures Time Change & Automatic Programming
----------------------------------------------------------------------

Date: 1 Apr 84 13:44:53-PST (Sun)
From: decvax!genrad!wjh12!vaxine!pct @ Ucb-Vax
Subject: Re: Cellular Automata
Article-I.D.: vaxine.221

There is a big review article by Wolfram in Reviews of Modern Physics,
v. 55, no. 3, p. 601 (July 1983), with a large list of references.

------------------------------

Date: 2 Apr 84 18:41:37-PST (Mon)
From: hplabs!hao!seismo!rochester!ur-laser!bill @ Ucb-Vax
Subject: An expert system for reading chest radiographs
Article-I.D.: ur-laser.136

I have developed an "expert" system that analyzes chest  ra-
diographs  for  tumors.   This system was tested on 37 films
that contain nodules.  It is capable of finding the  nodules
in  92%  of the films.  In studies of mass screenings of ra-
diographs by radiologists it was found that the radiologists
miss  25-30%  of  all nodules < 1cm. A Rib Expert determines
whether a candidate nodule (possible tumor) is a rib. A  No-
dule Expert, a linear-discriminant-based pattern recognizer,
classifies candidate nodules. All candidate nodules that are
classified as any type of nodule are presented to a radiolo-
gist for further inspection.  Radiologists can recognize no-
dules  as  such  once  they are pointed out.  If you are in-
terested in this work or want leads to other methods of  au-
tomated  chest  film  analysis,  which are listed in the bi-
bliographies, contact Peggy Meeker ((716)275-7737, {allegra,
seismo}!rochester!peg)  at  the Computer Science Dept at the
University of Rochester and request the following TRs:

Lampeter, W.A.  "Design, tuning, and performance  evaluation
of an automated pulmonary nodule detection system."  TR-120,
Computer Science Department, University  of  Rochester,  Ro-
chester NY, 1983.

Lampeter, W.A.  "Three image experts which help  distinguish
tumors  from  non-tumors,"  TR-123, Computer Science Depart-
ment, University of Rochester, Rochester NY, 1984.

Other works of possible interest:

Ballard, D. H., J. Sklansky.   "Tumor  detection  in  radio-
graphs,"   Computers  in  Biomedical  Research,  6, 299-321,
1973.

Jagoe, J.R., "Reading chest radiographs  for  pneumoconiosis
by computer," Brit. J. Ind. Med., 32, 267-272, 1975.

Toriwaki, J., et al., "Pattern recognition of chest x-ray
images," Comp. Graph. Pat. Recog., 2, 252-271, 1973.



Bill Lampeter
Department of Radiology
School of Medicine and Dentistry
University of Rochester
(716) 275-5101 or (716) 275-3194
{seismo, allegra}!rochester!ur-laser!bill
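
[The Nodule Expert above is described only as linear-discriminant-based.
As a reminder of what that classifier family does, here is a minimal
two-class Fisher linear discriminant on made-up 2-D feature vectors; the
feature names and data are invented and have nothing to do with the
actual system in the cited TRs.]

```python
# Minimal Fisher linear discriminant for two classes of 2-D feature
# vectors (invented "size"/"contrast" values).  Illustrative of the
# classifier family named above, not the actual nodule detector.

def mean(pts):
    n = len(pts)
    return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

def fisher_direction(a, b):
    ma, mb = mean(a), mean(b)
    s = [[0.0, 0.0], [0.0, 0.0]]        # pooled within-class scatter (2x2)
    for pts, m in ((a, ma), (b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],   # w = Sw^-1 (ma - mb)
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

nodules     = [(3.0, 2.0), (3.5, 2.5), (2.8, 2.2)]
non_nodules = [(1.0, 0.5), (0.8, 1.0), (1.2, 0.7)]
w = fisher_direction(nodules, non_nodules)
score = lambda p: w[0] * p[0] + w[1] * p[1]
thresh = (score(mean(nodules)) + score(mean(non_nodules))) / 2
print(score((3.2, 2.1)) > thresh)   # a nodule-like point scores above threshold
```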

------------------------------

Date: Sat 7 Apr 84 21:30:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI Curriculum

The April issue of IEEE Computer discusses computers in education.
The first article is on a model curriculum for Computer Science, and
pages 12-13 describe a sample curriculum for AI.  About 20 references
to suggested AI texts and articles are also given.

                                        -- Ken Laws

------------------------------

Date: 6 Apr 84 21:16:53 PST (Friday)
From: Ron Newman <Newman.es@Xerox.ARPA>
Subject: Alan Kay on DARPA research & Mansfield amendment

Excerpted from an interview in the April 1984 issue of ST.Mac magazine
(Softalk's magazine for the Macintosh).  All [bracketed phrases] are as
in the original.


Alan:  Things haven't been the same in computer science since two things
happened.  The awful thing that happened was the Mansfield amendment in
1969.  The amendment was a congressional reaction to pressure from the
population about the Vietnam war, mostly uninformed pressure.

  What it did was force all military funding to be put under the
scrutiny of Congress and to be diverted only to military-type things.
All of a sudden, everything was different at ARPA [the Advanced Research
Projects Agency].


Q:  ARPA became DARPA (the Defense Advanced Research Projects Agency) at
that point?

Alan: ARPA became DARPA.  The last good thing to be done had already
been funded, which was the Arpanet [a network of communicating computers
around the world that allows scientists to send messages to each other].
That was finished in 1970.  That was the end of ARPA funders [program
directors] being drawn from the ARPA community.

During the golden age at ARPA, the funding was much less than it is now,
but it was wide open.


Q: More creative work was done?

Alan:  Yeah.  Their whole theory--partially because the managers of ARPA
were scientists themselves--was "we fund people, not projects.  If we
can understand what these guys are doing, we should probably be off
doing it ourselves.  We'll just dump half this money for three years and
take our lumps."

They took percentages, like you have to in real research.  And, God, did
they get some great stuff!

~~~~End of excerpt~~~~~


Alan makes lots of other brash statements in this article too.  I'll
leave you with just this one:

  "I'd just as soon send all the engineers around here in Silicon Valley
to the Outback of Australia until they have read something like 'The
Federalist Papers' or Adam Smith's 'Wealth of Nations' or *something*,
for God's sake....what they're doing is actually vandalizing an entire
generation of kids by acting as though things like Basic have value."

------------------------------

Date: 5 Apr 84 17:56:01 PST (Thursday)
From: Ron Newman <Newman.es@Xerox.ARPA>
Subject: Strategic Computing in Electronic News 3/19/84

[personal comment follows at end of article--RN]

"DOD Strategic Computing to get $95M in Funding"
Electronic News, March 19, 1984, page 18
by Lloyd Schwartz

  WASHINGTON (FNS)--A virtual doubling of the funds for the Defense
Department's Strategic Computing initiative in fiscal year 1985--from
$50 million to $95 million--represents the first step in providing
"dramatic new computational capabilities to meet future critical defense
needs," Pentagon officials reported to Congress.

  They said that, as computer capability evolves, "men and computers
will operate as collaborators in the control of complex weapon systems."
It boiled down to, they added, future "wars by computer," with the side
possessing the superior technology prevailing.

  Dr. Robert S. Cooper, director of DOD's Defense Advanced Research
Projects Agency (DARPA), describing the program as well under way,
explained it is using a new idea, employing multiprocessor architecture
to reach for a new generation of computers with as much as 10,000 times
the computing capability of hardware available today.

  The computers, endowed with artificial intelligence, will be capable
of solving extraordinarily complex problems involving human beings,
understanding speech and responding in kind,  Dr. Cooper indicated to
the House Armed Services Committee.  They also will require a whole new
system of prototyping, it was added.

  Dr. Cooper testified that while computers are already widely employed
in defense, current computers have inflexible program logic and are
limited in their ability to adapt to unanticipated enemy actions in the
field.  The problem, he noted, is exacerbated by the increasing pace and
complexity of modern warfare.

  "The Strategic Computing program will confront this challenge by
producing adaptive, intelligent computers specifically aimed at critical
military applications,"  the DARPA chief continued.  "These new machines
will be designed to solve complex problems in reasoning.  Special
symbolic processors will employ expert human knowledge contained in
radical new memory systems to aid humans in controlling the operation of
complex military systems.

  "The new generation computers will understand connected human speech
conveyed to them in natural English sentences, as well as be able to see
and understand visible images obtained from TV and other sensors."

  Dr. Cooper noted DARPA has already demonstrated a limited voice
message system in which a computer recognized and understood human
speech to receive its commands.  The computer was able to respond
verbally, using synthesized speech, although it possessed a limited
vocabulary.

  Another example of technological advancement, Dr. Cooper noted, was
DARPA's recent success in applying a finely-focused ion beam in the
maskless fabrication of integrated circuits.  He said this work is
continuing and "could result in a major breakthrough in ultimately
achieving a large-scale maskless fabrication capability."

  Summing up, the DARPA chief declared "In the future, supercomputers
with reasoning ability and natural language interfaces with military
commanders will be able to participate in military assessment and may be
able to simulate and predict the consequences of various proposed
courses of military action.  This will allow the commander and his staff
to focus on the larger strategic issues, rather than have to manage the
enormous information flow that will characterize the battles of the
future."

  Dr. Cooper added that the balance of military power in the future
"could well depend on successful application of 'superintelligent
computers' to the control of highly-effective advanced weapons."

~~~~~End of Electronic News article~~~~~


Comments:

1.  In the past, defenders of DARPA funded computer research have
asserted that the military and civilian industry have the same goals, so
that what's good for the Pentagon is good for the commercial market too.
But now we have a program whose goal, in the Pentagon's own words, is to
produce "adaptive, intelligent computers ***specifically aimed at
critical military applications***."

  [Sorry if I'm injecting any personal bias here, but this seems to be a
  non sequitur.  Past military research (e.g., image understanding) was
  also targeted at critical military applications; that didn't prevent
  it from also being useful or even critical to civilian industry.  The
  strategic computing effort need not be different.  All that has changed
  is the military's boldness in expressing its own importance, about which
  it may or may not be right.  -- KIL]

2.  Everyone knows how backward Soviet computer science and industry
are, so who is he talking about when he refers to "'wars by computer,'
with the side possessing the superior technology prevailing"?  Once
again, the U.S. leads the way into a new round of the arms race.


/Ron

------------------------------

Date: Fri 6 Apr 84 11:57:07-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Issues in Language, Perception and Cognition

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

**** Due to a scheduling conflict, there has been a room change, to 050 ****

WHO: Len Talmy, Cognitive Science Program and German Dept.,  UC Berkeley
WHAT: How Language Structures its Concepts
WHEN: Monday April 9 12:00 noon
WHERE: Room 380-50

------------------------------

Date: Thu 5 Apr 84 17:02:34-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Automatic Deduction talk

          [Forward from the Stanford bboard by Laws@SRI-AI.]

                Monday, April 9th in MJH 301 at 2:30.


                    THE ORIGIN OF BINARY-SEARCH ALGORITHMS


                               Richard Waldinger
                        Artificial Intelligence Center
                               SRI International


     Many of the most efficient numerical algorithms employ a binary search, in
which  the  number we are looking for belongs to an interval that is divided in
half at each iteration.  We consider how such algorithms might be derived  from
their specifications.

     We follow a deductive approach, in which programming is regarded as a kind
of  theorem  proving.    By  systematic  application  of this approach, several
integer and real-number algorithms for such functions as the  square  root  and
quotient have been derived.  Some of these derivations have been carried out on
an  interactive  program-synthesis  system.    The  programs  we  obtained  are
different from what we originally expected.
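
  [The interval-halving idea described in the abstract can be sketched
  concretely.  The Python routine below is only an illustration of a
  binary-search algorithm for the integer square root; it is not one of
  the programs derived in Waldinger's synthesis system.]

```python
def isqrt(n):
    """Integer square root by binary search.

    Maintains an interval [lo, hi) known to contain floor(sqrt(n));
    the interval is halved at each iteration, as in the talk's
    description of binary-search numerical algorithms.
    """
    assert n >= 0
    lo, hi = 0, n + 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid   # the answer is at least mid
        else:
            hi = mid   # the answer is below mid
    return lo
```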

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #45
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:03:09 PST
Mail-From: LAWS created at 11-Apr-84 16:00:48
Date: Wed 11 Apr 1984 15:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #45
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:03:09-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest           Thursday, 12 Apr 1984      Volume 2 : Issue 45

Today's Topics:
  AI Tools - Real-Time AI & MASSCOMP & MV68000 Systems Wanted,
  Review - Micro LISPs & Common LISP Alert & CACM Humor,
  AI Jobs - Noncompetition Clauses,
  Expert Systems - Articulation,
  Natural Language - Metaphors,
  Seminars - Automated Algorithm Design & Engineering Problem Solving
----------------------------------------------------------------------

Date: 3 Apr 84 11:46:21-PST (Tue)
From: hplabs!hao!ames-lm!al @ Ucb-Vax
Subject: Real time man-in-the-loop LISP machines
Article-I.D.: ames-lm.196

Does anyone know of any AI systems or LISP machines that are oriented
towards real time, man-in-the-loop simulation?  We are beginning work
on a space station simulator aimed at human factors research.  LISP
is an appealing language in many respects but all of the systems
I've heard of are interactive, non-real time oriented.  We need something
that can pretend that it's a space station and do it fast enough
and consistently enough to keep up with human temporal perception.

------------------------------

Date: 5 Apr 84 10:04:50-PST (Thu)
From: decvax!mcnc!philabs!rdin!perl @ Ucb-Vax
Subject: OPS5 and Franz LISP wanted
Article-I.D.: rdin.371

We are looking for implementations of Franz LISP and OPS5 that
will run on a MASSCOMP MC500 under MASSCOMP UNIX version 2.1 (2.0).

Thank you.

Robert Perlberg
Resource Dynamics Inc.
New York
philabs!rdin!rdin2!perl

------------------------------

Date: Mon, 9 Apr 84 16:37:27 cst
From: George R. Cross <cross%lsu.csnet@csnet-relay.arpa>
Subject: LISP on a Data General?


Does anyone know of a LISP implementation on Data General's
MV8000 type computers under AOS/VS?   One good enough for
teaching is all that is required.

       George Cross
       Computer Science, LSU
       <cross%lsu@csnet-relay>

------------------------------

Date: 11 Apr 1984 0206 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: micro LISP review

There's a good article in the April issue of PC Tech Journal
about three micro versions of LISP: IQ LISP, muLISP-82, and
TLC LISP.  It gives a fair amount of implementation detail,
contrasts them, and compares them to their mini and mainframe
cousins.  The author is Bill Wong, who's working on his PhD in
computer science at Rutgers.

At least the article looks pretty good to me, but it's been a
long time since I did any LISP programming.  Anyone feel like
reviewing Wong's review?
                                Larry Carroll
                                Jet Propulsion Lab.
                                   larry@jpl-vlsi

------------------------------

Date: Mon, 9 Apr 84 13:03 EST
From: Tom Martin <TJMartin@MIT-MULTICS.ARPA>
Subject: Could it be COMMON LISP?

An announcement just arrived in the mail from Digital Press:

          COMMON LISP manual, Guy L. Steele, Jr.
          $22.00 / May 1, 1984 / Paperbound

  --Tom Martin
    Arthur D. Little, Inc.

------------------------------

Date: Wed 11 Apr 84 09:53:20-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: What's Happening to Stuffy Old CACM?

I just can't resist passing along these items from Jon Bentley's
column in the April CACM:

  The CRAY-3 is so fast that it can execute an infinite loop
  in less than two minutes.

  Have you heard how [it] implements a branch instruction?
  It holds the program counter constant and moves the memory.


If you like those, you'll also like the articles beginning on
page 343.  Of particular interest to AIers is "The Chaostron:
An Important Advance in Learning Machines."

                                        -- Ken Laws

------------------------------

Date: Sun, 8 Apr 1984  15:21 EST
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: Non-competition clauses

Recently a student of mine applied for a position with one of the new AI
companies (I'd rather not say which one) and received what he considers
to be a very attractive offer.  Unfortunately, there is one problem that
will probably prevent him from accepting the job: the company requires
him to sign an agreement that if he leaves that company for any reason,
he will not compete with them or work for any competitive business for a
period of three years.  In order to keep this agreement in effect, the
company would have to continue to pay him his salary, minus any money he
makes from other employment or consulting.  Since this company defines
its business as AI and AI tools in a very broad sense, this means that
they could force the former employee to stay completely out of the field
of AI for three whole years if he leaves them -- an eternity in this
field.

I've heard of companies that require you to promise not to use any
proprietary knowledge on behalf of your next employer (or anyone else),
but I've never heard of an agreement like this one.  Since the penalty
for leaving is potentially so high (you get a salary for doing nothing,
but are effectively prohibited from practicing your chosen profession
for a period of time that is long enough for you to go completely
stale), it looks to me like they are trying to make you sign up with
them for life -- at their option, of course.

This company seems to think that this agreement is a perfectly routine
matter and that many companies in AI have similar requirements.  Is this
true?  Is this sort of thing spreading?  Have people out there actually
signed agreements of this sort?  Are they legally enforceable?  Unless I
hear otherwise, I'm going to consider this as an isolated case of
institutional paranoia on the part of this one company, and will steer
my students away from that company in the future.  If everyone is doing
it, that is much more alarming.

  -- Scott Fahlman, CMU  <fahlman at cmu-cs-c.arpa>

------------------------------

Date: Sun 8 Apr 84 18:14:52-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Noncompetition Clauses

This is the first I've heard of the post-employment restrictions
Scott Fahlman mentioned, although I've heard of noncompetition agreements
in other industries.  (I believe Nolan Bushnell, for instance, started
Pizza Time Theaters because he couldn't compete with his own creations
at Atari.  I've also heard of cases in the giant-screen TV and
restaurant businesses, always part of a buy-out agreement.)  The
intention is obviously to stop someone from spinning off his own
company to market an idea developed for the first employer.

Although the clause in question is a strong constraint, I don't see
that it would necessarily bind you to the company for life.  Think of
it as a three-year paid sabbatical or other grant.  It has a built-in
disincentive for taking a job in any other field, but is a real
bonanza for someone who wants to spend time taking courses and
catching up on the literature in the AI field.

As a practical matter, I doubt that the employer would exercise his
option unless you intended to compete directly in the same product
you were working on.  It wouldn't make sense to buy you off if you
intended shifting to even a moderately different AI application.

                                        -- Ken Laws

------------------------------

Date: 3 Apr 84 18:04:46-PST (Tue)
From: decvax!cwruecmp!borgia @ Ucb-Vax
Subject: Re: expert system algorithms
Article-I.D.: cwruecmp.1127

Don't we always come back to the same old epistemological question?
What good is any system to a human being unless it is understood
(maybe in parts) by one or more human beings?

An expert system should be an intelligent and articulate student
who learns from several experts. Understanding control structures
is not the critical issue. Well-known and fairly simple inference
mechanisms are available. The key issue is articulating what
knowledge was used and how in solving a problem.

"What good is knowledge when it brings no profit to its bearer"
                   - Teireisias in Oedipus the King, Sophocles

  -- joe borgia

  usenet:  decvax!cwruecmp!borgia
  csnet:   borgia@case
  arpanet: borgia.case@csnet-relay

------------------------------

Date: Thu, 5 Apr 84 15:57:13 est
From: Michael Listman <mike%brandeis.csnet@csnet-relay.arpa>
Subject: metaphors

       I am interested in finding information on the extent of
natural language research and expectations.

       In particular, I would like to find out if any research
has been done on comprehension of metaphors.  I realize that
this would present problems such as what to do upon
encountering a metaphor that one (or a system) has never
before encountered.

      Take as an example,

             "Man is a wolf"

      - although it seems obvious to a human, how does one know
which aspects of wolf to apply to man?

      As another example, how do we know that

             "Man is a Bic pen"

is a bad metaphor?  Do we exhaust all the features of each (man
and Bic pen) and decide that not enough of them are similar
for a reasonable comparison?  This seems plausible, but I could
imagine a situation in a discourse where this or a similar metaphor
would make perfect sense (please don't ask me to).
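
  [One naive version of the feature-exhaustion idea above can be
  sketched as follows.  The feature sets here are invented purely for
  illustration; no claim is made that this is how people actually
  judge metaphors.]

```python
# Toy feature sets, invented only to illustrate the idea of
# exhausting and comparing the features of two terms.
FEATURES = {
    "man":     {"animate", "social", "predatory", "mortal"},
    "wolf":    {"animate", "social", "predatory", "furry"},
    "Bic pen": {"inanimate", "cheap", "disposable", "plastic"},
}

def overlap(a, b):
    """Jaccard similarity of two terms' feature sets:
    |intersection| / |union|."""
    fa, fb = FEATURES[a], FEATURES[b]
    return len(fa & fb) / len(fa | fb)

# On this crude measure, "Man is a wolf" scores well (shared features)
# while "Man is a Bic pen" scores zero, making it the worse metaphor.
```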

       I believe that in pursuing research in this direction, we
will eventually attain the knowledge to build a psychologically real
natural language understander, which I believe is the only way we
will ever attain a system that can approximate human comprehension.

       If anyone can point me toward research in this area, or
references, or simply guess as to where research like this will lead
in the near future (or ever) please respond as soon as possible.


                                  --- Michael Listman

------------------------------

Date: 9 Apr 84 09:28:48 EST
From: DSMITH@RUTGERS.ARPA
Subject: Semiautomated Algorithm Design

            [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                 Rutgers' Computer Science Colloquium

                    Semiautomated Algorithm Design

                           Douglas R. Smith

   Algorithm design is viewed as the transformation of a formal  specification
of a problem into an algorithm. We present a formal top-down method for
creating hierarchically structured algorithms. The method works as follows: to
design an algorithm A0 for a problem specification P0, the programmer
conjectures the overall structure S of A0, then uses knowledge of the
structure S to deduce subproblem specifications P1,...,Pn for the
underdetermined parts of S.  Algorithms A1,...,An are then designed for the
subproblems P1,...,Pn and  assembled  (via  the structure S) into an
algorithm for the initial problem P0.  This process results in the
decomposition of the initial problem specification into a hierarchy  of
subproblem  specifications and the composition of a corresponding
hierarchically structured algorithm.  The knowledge used  to  deduce
specifications  for subproblems is obtained by analysis of the particular
structure S and is encoded in a procedure called a design strategy for S.
The  top-down  design process  depends  on  design  strategies  for  many
different kinds of algorithm structures.

     We illustrate this approach by presenting  the  knowledge  needed  to
synthesize  a  class  of  divide and conquer algorithms and by deriving a
quicksort algorithm.  Our examples are drawn from experiments with  an
implemented  algorithm  design  system called CYPRESS.  Current efforts to
systematically acquire design strategies for fundamental classes of
algorithms will be discussed.

DATE:    Thursday, April 12, 1984
TIME:    10:30 a.m.
PLACE:   Hill Center - Room 705
        *  Coffee will be served at 10:00 a.m.
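
  [The divide-and-conquer structure S mentioned in the abstract can be
  illustrated with an ordinary quicksort.  This Python sketch merely
  shows the decompose/solve/compose shape of such algorithms; it is not
  the derivation produced by the CYPRESS system.]

```python
def quicksort(xs):
    """Quicksort as an instance of a divide-and-conquer structure S:
    decompose into subproblems, solve them recursively, compose."""
    if len(xs) <= 1:                          # primitive case: solved
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]     # subproblem P1
    right = [x for x in rest if x >= pivot]   # subproblem P2
    # compose the subproblem solutions via the structure S
    return quicksort(left) + [pivot] + quicksort(right)
```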

------------------------------

Date: 9 Apr 1984  14:15 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Engineering Problem Solving

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

AI Revolving Seminar
Wednesday, April 11, 4:00pm 8th floor playroom

Jerry Roylance  --  "Programs that Design Circuits"

        People can design circuits; this task -- at least partially --
can be done by computers.  I'll talk about how designers think about
circuits and how to make computers think that way, too.  While the talk
will be directed toward circuit design, that is not the sole intent.
How will building problem solvers in one engineering domain help us
build them in other domains?  What "domain-independent" facts can be
carried across?
        Engineering domains (such as circuit design) are good ones to
teach computers.  They have well defined models that let the machine
verify and debug its designs (thus allowing some chance at creativity).
Engineering domains also have many "standard problems" with cookbook
solutions.  If the computer can be clever about recognizing instances of
these problems and combining them together, it can produce nontrivial
designs.  The quality of a design in an engineering domain is also easy
to assess.
        The circuit design domain is not simple, however.  Hierarchical
expansion of abstract components fails to account for many designs.  The
parts of a design are not independent and that makes it difficult for
the knowledge sources to be modular.  Arithmetic constraints solve some
of these problems; some others can be solved by manipulating mechanism
constraints.
        An important perspective:  when teaching a system a new trick,
find out why someone thought of that trick in the first place.

------------------------------

End of AIList Digest
********************

∂13-Apr-84  1129	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #46
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 84  11:24:42 PST
Date: Fri 13 Apr 1984 09:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #46
To: AIList@SRI-AI


AIList Digest            Friday, 13 Apr 1984       Volume 2 : Issue 46

Today's Topics:
  Education - LISP Advice Sought,
  AI Jobs - Noncompetition Clauses,
  Natural Language - Metaphor,
  Humor - Smalltalk-52 Seminar
----------------------------------------------------------------------

Date: Wednesday, 11 April 1984 23:08:30 EST
From: Kai-Fu.Lee@cmu-cs-g.arpa
Subject: Comments solicited

I will be teaching a LISP/AI course to a group of Pennsylvania high
school seniors who are talented in science.  (part of the Penn.
Governor's School for the Sciences)  I am interested in hearing from
people with similar experience or ideas for subjects/assignments/
projects/textbooks.

The students will likely be divided into two groups, one with and one
without experience in programming.  The course consists of 17 1-hour
lectures to each group. I am planning to divide the course into 3 parts:
(1) Basic Concepts of C.S. [2 lectures] (2) LISP Programming [6-7
lectures] (3) A.I. [8-9 lectures]. Since the schedule is rather tight,
it is unlikely that I could cover anything in much detail.  There will be
two or three programming assignments.  In addition, students may choose to
do a project in computer science.

Thanks,
Kai-Fu Lee
KFL@CMU-CS-G.ARPA

------------------------------

Date: Thu 12 Apr 84 00:36:19-PST
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Re: Noncompetition Clauses

Ken, I think you missed one problem with the noncompetition agreement.
I guess you looked at it as I did: your salary would be in about the
same range if you changed jobs, so it doesn't hurt too much to just
continue at the old salary and take the time off.

Suppose you started to work for company A at salary X.  The next year (or so)
you get a much better offer (say as a manager) from company B at salary 2*X.
Now you are prevented from taking that job (assuming B is in competition with
A).

Another way of looking at that is to suppose that you do something good
in your first few years after school but that your company doesn't
want to give you the raise in salary that is commensurate with your proven
abilities.  Now you can't just say that company B will pay you twice your
current salary.  They'd just laugh at you and say you couldn't go.  It might
be worth 3 years of your old salary to keep you from company B even if you
don't do any work for them.

I see the clause as cutting down on wage wars between companies.  It also
cuts down on the mobility available to employees of that company.  Finally
it probably prevents an employee from starting his own company in that
field.

I also feel that the restriction may have a negative effect on the company
requiring it.  Would you go to a black hole from which no one ever could
get away?  I suppose companies requiring that clause are just going to have
to settle for employees that can't find a better offer.  Would you want to
work with second class people?

------------------------------

Date: 12 Apr 84 08:56:18 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Non-competition clauses and other employment agreements


I think that anyone concerned about employment agreements would do well
to contact a lawyer.  The different states consider various clauses in
employment agreements enforceable to differing degrees.  I seem to remember
reading an article about six months ago that said (broadly) that California,
for example, considered clauses restricting people from continuing their
professional careers to be not generally in the public interest, and such
clauses should be carefully considered as to enforceability for that reason.
This same article (which I am trying to dig up) said something about the state
of California holding that clauses assigning all rights to all ideas, patents,
etc. arising during employment, are enforceable only to the extent that such
ideas, etc. resulted from the employment situation.  Again, one should
really contact a lawyer to get a clear opinion in any given situation.  There
is enough variability in this area to make any general comments suspect.

My personal experience has been that employment agreements and the
employer's approach to them are remarkably uniform in industry.  When I
have questioned companies about these agreements, I am told that they
adopt this fairly standard agreement and hold to it firmly on the advice
of their corporate counsel.  The general idea (from the company point of
view) seems to be to claim all you can now, and work out what you can really
enforce if the situation comes to that.  They take this approach
not because of some malignant motive, but because it has proved the most
prudent course for them to take.

------------------------------

Date: Thu, 12 Apr 84 17:51:28 EST
From: Mark S. Day <mday@BBN-UNIX>
Subject: Re: Employment Agreement

I would suggest that keeping someone entirely out of a field like AI for
three years is an illegal restraint of trade.  Employment agreements must be
reasonable with respect to time and area limitations to be enforceable, and
I doubt that 3 years is a reasonable time constraint, especially given that it
seems to be sufficiently long to get completely out of touch with the field.
The fact that the company offers to pay you for those three years is
irrelevant.

  --Mark

------------------------------

Date: Thu, 12 Apr 84 07:06:04 cst
From: Peter Chen <chen%lsu.csnet@csnet-relay.arpa>
Subject: Noncompetition Clauses

I think there are quite a few companies putting restrictions on
post-employment activities, although most of them are not as
restrictive as the company mentioned by Scott Fahlman.  I think it
is fair for an employer to ask its employees to avoid future
involvement in direct competition with the company
within a short period of time (say, one year instead
of three years) and in a more narrow subject area (i.e., in the area/topics
the individual is working on, rather than a broad definition of the AI field or
the whole computer field).

If I remember correctly, when I worked for a large computer manufacturer ten
years ago, I was required to sign an agreement that whatever
ideas or products I might develop in my spare time would belong to the company
even if the ideas/products were not related to computers.  Do you think that
is fair?  Do you think your computer employer has the rights to the novel you
write on weekends?  I think this case is much more unfair than asking
the employee not to compete with the company for a year or so after
he/she leaves.

As far as I understand, all these agreements/contracts are legally binding if
the contracts are signed under free will.  Therefore, they can be enforced if
the companies choose to do so.  However, most of the time the companies just
use them as a possible protection of their interests.

      Peter Chen
      Computer Science Dept., LSU
      <chen%lsu@csnet-relay>

------------------------------

Date: Wed, 11 Apr 84 17:45:10 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Metaphor

The thing about a metaphor is that it contains little explicit information.
It acts as a trigger in such a way that the hearer creates meaning for it.
Different hearers create different meanings.  For example, one hearer,
drawing from his background as an environmentalist might take "Man is a Wolf"
to mean that man has a wild, misunderstood soul while another hearer, drawing
from his background as a mountain man who has had to compete with wolves might
take the metaphor to mean that he himself is a savage beast that will kill,
if necessary, to live.

It becomes pretty far-fetched to make up a model of "metaphor" that says
that this information is contained in the statement of the metaphor.

  --Charlie

------------------------------

Date: 11 Apr 84 2255 EST (Wednesday)
From: Steven.Minton@CMU-CS-A.ARPA
Subject: Metaphor comprehension pointers


The following references might prove helpful if you're interested in
AI and metaphor comprehension:

    Carbonell, J.G. and Minton, S. "Metaphor and Common-Sense Reasoning"
    CMU tech report CMU-CS-83-110, March 83

    Carbonell, J.G. "Metaphor: An Inescapable Phenomenon in Natural Language
    Processing", in Strategies for Natural Language Processing, W. Lehnert
    and M. Ringle (eds.), Erlbaum 1982

    Carbonell, J.G. "Invariance Hierarchies in Metaphor Interpretation"
    Proceedings of the 3rd Meeting of the Cognitive Science Society, 1981

There's a large body of literature on analogical reasoning and other
aspects of metaphor comprehension. Much of the relevant research
has been done within psychology and linguistics. I'd suggest looking at
these for an overview:

    Ortony, A. (Ed.) "Metaphor and Thought" Cambridge Univ. Press 1979

    Lakoff, G. and Johnson, M. "Metaphors We Live By" Chicago Univ. Press 1980

    Gentner D. "Structure-Mapping: A Theoretical Framework for Analogy" in
    Cognitive Science, Vol. 7, No.2 1983

    Winston P. "Learning by Creating and Justifying Transfer Frames"  in
    Artificial Intelligence, Vol. 10, No. 2, 1978

I don't know of any natural language system which can handle a wide range
of novel metaphors, and I don't expect to see one soon.
Any such system would have to contain an enormous amount of
knowledge. Unlike most present-day NL systems, a robust metaphor comprehension
system would have to be able to understand many different domains.

In spite of this difficulty, metaphor comprehension remains a fertile
area for AI research. I've spent some time examining how people
understand sentences like "The US/Russian arms negotiations are a
high-stakes poker game".  When you get right down to it, it's amazing that
people can figure out exactly what the mapping between "arms negotiations"
and "poker games" is. What's most amazing is that using and understanding
metaphors APPEARS to take so little effort. (In fact, they are often the
easiest way to rapidly communicate complex technical information. The next
time you are at a talk, try counting the analogies and metaphors used.)

                                        -- Steve Minton, CMU

------------------------------

Date: Thu, 12 Apr 1984 11:31:13 EST
From: Danger, Will Robinson, Danger! <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Metaphoric Comparisons

Mike:
     Anthropologists and folklorists have been dealing with metaphor (and
related tropes) for a long time, in terms of their use in such common forms of
speech as proverbs and riddles, both of which depend almost totally on the
use of metaphoric and metonymic comparison.  One thing that's critical is the
recognition that use of metaphor is extremely context-dependent; i.e., you
cannot apply Chomskian assumptions that competence is important, because the
problem occurs in performance, which Chomsky relegates to a side issue.
     I'd suggest the following references for a start:

1.  Sapir and Crocker, eds., "The Social Use of Metaphor" -- an excellent
anthology, about eight years old, covering a great deal of ground.
2.  The special issue of the Journal of American Folklore from the early or
mid-seventies on Riddles and Riddling.
3.  Dell Hymes, "Foundations of Sociolinguistics".  (A really critical book
which set the stage for many anthropologists, linguists, etc. to shift over
from competence to performance; its biggest flaw is Hymes' insistence that
communication doesn't exist without intention on the part of at least one of
the performer(s), the receiver(s), and the audience.)
4.  The journal "Proverbium", which was, for its 25-year life, THE place to
look for research on proverbs and related stuff.  Especially good are articles
by Nigel Barley, Alan Dundes, and Barbara Kirschenblatt-Gimblett, whose "The
Proverb in Context" is a real key article.
5.  Kirschenblatt-Gimblett and Sutton-Smith, eds., "Speech Play".  A very good
anthology about uses of all sorts of special speech techniques, including
metaphorical comparisons, in various cultures.

Those are the ones I can remember off the top of my head.  There are lots
more stored in my bibliography hard-copy file at home, and you can drop me
a net-note if you need 'em...

  --Dave Axler

------------------------------

Date: 12 Apr 1984 09:49:50-EST
From: walter at mit-htvax
Subject: Smalltalk-52

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                 ANNALS OF COMPUTER SCIENCE SEMINAR
                   DATE:  Friday, April 13th, 1984
                   TIME:  Refreshments  12:00 noon
                  PLACE:  MIT AI Lab 8th Floor Playroom

                 SMALLTALK-52 and the Wheeler Send

                               ABSTRACT

        Recently discovered paper tapes reveal that J.M. Wheeler
        designed the first version of Smalltalk in 1952,
        intending it to run on the University of Cambridge's
        EDSAC Computer.  The initial implementation, however,
        required the machine's entire 512-word memory and was deemed
        infeasible.  Wheeler, who is credited with the invention
        of bootstrap code, subroutine calls, assemblers, linkers,
        loaders, and all-night hacking, can now be properly
        credited with inventing message passing, object oriented
        programming, window systems, and impractical languages.

        This fascinating historical discussion and the accompanying
        Graduate Student Lunch will be hosted by Steve Berlin.

        Next Week:
        Lady Lovelace's Public-Key Encryption Algorithm.

------------------------------

End of AIList Digest
********************

∂15-Apr-84  1824	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #47
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Apr 84  18:23:58 PST
Date: Sun 15 Apr 1984 17:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #47
To: AIList@SRI-AI


AIList Digest            Sunday, 15 Apr 1984       Volume 2 : Issue 47

Today's Topics:
  Education - Request for Writing, Geometry Systems,
  Misc. - TAMITTC,
  Applications - Artificial Big Brother,
  Seminars - Qualitative Process Theory & AI and VLSI,
  Conferences - COLING84 Information
----------------------------------------------------------------------

Date: Sat, 14 Apr 84 02:06 EST
From: Malcolm Cook <cook%umass-cs.csnet@csnet-relay.arpa>
Subject: TEACHING SYSTEMS. WRITING & GEOMETRY.

I'm looking for information on 2 things:
        1) What systems exist that are used to teach language skills,
especially teaching writing to children?  Also, I remember hearing
about studies showing that children are motivated to write simply
by having a simplified editor.  Any pointers to these studies?
There was an interesting article in the N.Y. Times Sunday magazine section
2-3 months back entitled "Writing to Read", about a curriculum
for 1st & 2nd grade in which the children were learning a simple
morpheme<==>grapheme map, allowing them to phonetically spell
any word they could pronounce.  Are there any AI systems involved
in this course?

        2) What tutoring systems exist for geometry?
I am somewhat familiar with the CMU approach (Boyle, Anderson, Shrager...)
but what else is around?  Anything on the spectrum from GOOD
programmed instruction to /reactive environments/ would be of interest.

thanks,

        Malcolm Cook (Cook.umass-cs@csnet-relay)

------------------------------

Date: 7 Apr 84 17:39:57-PST (Sat)
From: decvax!cwruecmp!borgia @ Ucb-Vax
Subject: TAMITTC
Article-I.D.: cwruecmp.1135

A few weeks ago a cryptic TAMITTC poster appeared on all the doors
of our computer engineering department. Recently I discovered what
it was all about on an obscure campus billboard.

        There Are More Important Things Than Computers

And there was a footnote.

        Like what? People? Oh, those things!

------------------------------

Date: 8 Apr 84 15:53:05-PST (Sun)
From: harpo!ulysses!allegra!don @ Ucb-Vax
Subject: Artificial Big Brother
Article-I.D.: allegra.2388

                AI and criminology

I totally agree with DW's article in net.crypt.  Computer techniques
would be used to keep track of "political criminals".  Middle class
intellectuals are far more vulnerable to this sort of control than are
street criminals and drifters.

Already, right wing organizations use this technology to keep track of
people they consider politically dangerous, and while the government is
not allowed to do this, they have received information from these
organizations under the table.  In some cases, victims are chosen
simply by correlating magazine subscription information.

------------------------------

Date: 9 Apr 84 10:05:30-PST (Mon)
From: hplabs!hao!seismo!brl-vgr!abc @ Ucb-Vax
Subject: Re: Artificial Big Brother
Article-I.D.: brl-vgr.15

But, as with so many things: do you really think that if
the "good guys" don't build a tool which can be used for
"good" or "evil", the "bad guys" won't build and use it?

It seems that what is needed is research into methods for
controlling these tools (Computer Science) and research
into new public policies regarding the use and misuse of
such tools (Humanities and Social Science).

Remember: whether the U.S. did it or not, others still
would have developed and deployed nuclear weapons.

------------------------------

Date: 10 Apr 84 13:51:13-PST (Tue)
From: hplabs!tektronix!tekigm!dand @ Ucb-Vax
Subject: Re: Artificial Big Brother
Article-I.D.: tekigm.74

I cannot agree too strongly with Brint Cooper about this. The tool never makes
the wielder any more or less an "evil" person.

If given a choice, I'd rather have such a tool built by the established AI
community for two reasons:
1) The program's existence is published, so people can think about the
   implications and possibly set up systems to reduce the amount of abuse
   the system is used for. As a possible victim of misuse, I can also start
   thinking about preventive measures against unreasonable privacy invasions.
   (I personally believe no one even now has any real privacy if someone is
   out to do you in, but that is not germane here.)
2) If such a tool is in the public domain, at least the people it was
   originally designed for, the law-enforcement agencies, would get some
   use out of it. If this tracking system were to be built in a CIA shop or
   an NSA shop, no one outside those organizations will ever know of its
   existence, and thus never be able to use it.

Abuses with such a system are going to be inevitable; the goal for us to set
is to see that the abuses are kept to a minimum, which we can't do if the
system requires "Top Secret/Burn Before Reading" clearance to even know that it
exists.

Lest anyone try to say that the possible abuses of such a system outweigh the
few benefits of it, remember that Theodore Bundy was convicted with such
evidence as gasoline receipts in the area where one of his victims disappeared,
at the same time she disappeared. With such a system, perhaps, Ted Bundy would
not have racked up the score of dead, young women that he did. Such a system
might help pinpoint the current Green River Killer in Washington State,
or reduce the predations of the itinerant killers who prey on anyone they
think they can get away with attacking. If some shadowy bureaucrat were out
to get you, this system would not be necessary--a judge's signature is all
that is needed to open up the records of your Visa, your bank, your employer,
etc. (granted, it may not be a legal action on the part of that judge, but
we're already talking about illegal activities, no?).

Finally, if this discussion is going to go on, let's move it to net.politics
or net.misc or net.legal; net.ai is not the proper forum for this discussion.

Dan C Duval
ISI Engineering
Tektronix,Inc

tektronix!tekigm!dand

------------------------------

Date: 12 Apr 1984  16:16 EST (Thu)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Qualitative Process Theory

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                               SEMINAR

                          Kenneth D. Forbus


                          April 17 - 4:00PM
                      NE43 - 8th floor Playroom

                  Title:  QUALITATIVE PROCESS THEORY


        Objects move, collide, flow, bend, heat up, cool down,
stretch, break, and boil.  These and other things that happen to cause
changes in objects over time are intuitively characterized as
processes.  To understand common sense physical reasoning and make
programs that interact with the physical world as well as people do we
must understand qualitative reasoning about processes and their
effects.  Qualitative Process theory defines a simple notion of
physical process that appears useful as a language in which to write
dynamical theories.  Reasoning about processes also motivates a new
qualitative representation for quantity in terms of inequalities,
called the Quantity Space.

        This talk will describe the basic concepts of Qualitative
Process theory, two different kinds of reasoning that can be performed
with them, and its implications for causal reasoning.  Several
examples will be presented to illustrate the utility of the theory,
including figuring out that a boiler can blow up and how different
theories of motion may be encoded.


Refreshments at 3:45PM

Host:  Professor Patrick H. Winston

------------------------------

Date: 12 Apr 84 16:31:46 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on AI and VLSI this Coming Thursday (room 423)...

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    Knowledge-Based Aids for VLSI Design
          Speaker:  Tom Mitchell
          Date:     Thursday, April 19, 1984, 1:30-2:30 PM
          Location: Hill Center, ***room 423***

       Knowledge-based  systems  provide  one  possible approach to dealing
    with the complexities of VLSI design.  This talk discusses  the  design
    of  such a system, called VEXED, which aids the user in the interactive
    design of VLSI circuits.  VEXED  is  intended  to  provide  suggestions
    regarding  alternative  implementations  of circuit modules, as well as
    warnings regarding conflicting constraints during design.   The  system
    is composed of a circuit network expert (CIRED), a layout expert (RPL),
    and a digital signal analysis expert (CRITTER).  A prototype version of
    VEXED  has  been implemented, and a second version of the system is now
    under development.

------------------------------

Date: Thu 12 Apr 84 11:10:48-PST
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: COLING84 information on registration, travel, housing,
         summer school

****************** PLEASE POST, CIRCULATE, AND REDISTRIBUTE *****************

   COLING84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS

COLING84 is scheduled for 2-6 July 1984 at Stanford University, Stanford,
California.  It will also constitute the 22nd Annual Meeting of the
Association for Computational Linguistics, which will host the conference.

Information about the conference, registration, travel, and accommodations,
and about the six summer school courses that will be held during the
preceding week (25-29 June) has just been made available in the form of The
COLING Courier.  For a copy, contact Don Walker, COLING84, SRI
International, Menlo Park, California, 94025, USA [phone: (415)859-3071;
arpanet: walker@sri-ai; telex [650] 334486].  Other requests for information
about the conference should be addressed to Martin Kay, COLING84, Xerox PARC,
3333 Coyote Hill Road, Palo Alto, California 94304, USA [phone:
1-(415)494-4428; arpanet: kay@xerox; telex [650] 1715596].

The summer school, which will be held 25-29 June, consists of week-long
tutorials on six subjects that are central to computational linguistics but
on which instruction is still not routinely available:  LISP AS
LANGUAGE--Brian Smith, Xerox and Stanford; PROLOG FOR NATURAL LANGUAGE
ANALYSIS--Fernando Pereira, SRI International; PARSER CONSTRUCTION
TECHNIQUES--Henry Thompson, Edinburgh; SITUATION SEMANTICS--David Israel,
BBN, & John Perry, Stanford; MACHINE TRANSLATION--Brian Harris, Ottawa, &
Alan Melby, Brigham Young; SOUND STRUCTURE OF LANGUAGE--Mark Liberman, Bell
Labs.  Enrollments are limited to 30 in each tutorial, so register early.

A remarkably rich set of computational facilities will be available at
Coling84 for demonstrating programs and systems.  For information, contact
Doug Appelt, SRI International, Menlo Park, California 94025 [phone: (415)
859-6150; arpanet: appelt@sri-ai; telex: [650] 334486].

You are advised to BOOK EARLY FOR COLING84, since airline reservations will
be much harder than usual to obtain.  Custom Travel Consultants, 2105
Woodside Road, Woodside, CA 94062 [phone (415)369-2105], is responsible for
registration, travel, and housing.  Full information is provided in the
Coling Courier, but call them if time is short.

------------------------------

End of AIList Digest
********************

∂16-Apr-84  1106	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #48
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Apr 84  11:05:48 PST
Date: Mon 16 Apr 1984 09:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #48
To: AIList@SRI-AI


AIList Digest            Monday, 16 Apr 1984       Volume 2 : Issue 48

Today's Topics:
  Applications - Business AI Request,
  Natural Language - Metaphor References,
  AI Literature - Book Prices,
  AI Jobs - Noncompetition Agreements,
  AI Computing - Discussion
----------------------------------------------------------------------

Date: Mon, 16 Apr 84 00:47:17 pst
From: syming%B.CC@Berkeley
Subject: AI for Business?

I am soliciting any information on the application of artificial intelligence
and/or expert system techniques in the area of business administration, such
as marketing, finance, production/operations, accounting, organizational
behavior ... etc. Any information (e.g., on-going projects, current research,
new ideas, trends ...) is greatly appreciated.

Syming Hwang
School of Business Administration, U.C. Berkeley, 415-642-2070 (ofc)
350 Barrows Hall, U.C. Berkeley, Berkeley, CA 94720
syming%B.CC@Berkeley.ARPA

------------------------------

Date: Mon 16 Apr 84 02:21:16-EST
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Poetics Today & Metaphor

   The current issue of *Poetics Today* (V.4, N.2, 1983) is
specially dedicated to the subject of metaphor, and contains four
weighty articles by Umberto Eco, Eddy M. Zemach, Inez Hedges, and
Jan Wojcik. The article by Eco (who is considered by many to be
the foremost living literary theorist and semiotician in the
world) is especially useful.
   Eco provides a glimpse of just how vast is the literature on
metaphor:
   "The 'most luminous, and therefore the most necessary and
frequent' (Vico) of all tropes, the metaphor, defies every
encyclopedic entry. Above all because it has been the object of
philosophical, linguistic, aesthetic and psychological reflection
since the beginning of time. Shibles's 1971 bibliography on the
metaphor records around 3000 titles; and yet, even before 1971,
it overlooks authors like Fontanier, and almost all of Heidegger
and Greimas--and of course does not mention, after the research
in componential semantics, the successive studies on the logic of
natural languages, the work of Henry, Group µ of Liège, Ricoeur,
Samuel Levin, and the latest text linguistics and pragmatics."
   Eco makes some remarks on the subject of metaphor which are
highly pertinent to AI researchers:
   "No algorithm exists for metaphor, nor can a metaphor be
produced by means of a computer's precise instructions, no matter
what the volume of organized information to be fed in. The
success of a metaphor is a function of the sociocultural format
of the interpreting subjects' encyclopedia. In this perspective,
metaphors are produced solely on the basis of a rich cultural
framework, on the basis, that is, of a universe of content that
is already organized into networks of interpretants, which decide
(semiotically) upon the identities and differences of properties.
At the same time this content universe, whose format postulates
itself not as rigidly hierarchized, but rather according to Model
Q, alone derives from the metaphorical production and
interpretation the opportunity to restructure itself into new
modes of similarity and dissimilarity."
   The journal *Poetics Today* is a rich source of speculation
and analysis for anyone exploring the more subtle structures and
processes of natural language understanding.

     --Wayne McGuire <mdc.wayne@MIT-OZ>

------------------------------

Date: Fri, 13 Apr 84 16:14:56 PST
From: Koenraad Lecot <koen@UCLA-CS.ARPA>
Subject: Synapse Books Prices

I remember a message on the AIList that mentioned Synapse Books as
a [...] publisher of AI books. Have you seen their 1984 catalog ?
It contains two new "books" by a certain R.K. Miller, one at $200 and
the other at $485 ...
I knew that the prices of AI books were going up, but this is crazy...


[I remember an ad for a reprint of key expert systems papers for over
$1000 a year or two ago.  This wasn't a Comtex microfiche collection
(about $2000 per set), just a reprint compendium marketed for corporations
and Wall Street types.  -- KIL]

------------------------------

Date: 14 Apr 1984 11:59-PST
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Noncompetition Agreements

        I don't know about you, but whenever I am given a contract to
sign, I simply cross out anything I'm not willing to agree to and sign
what remains. If they want me, they sign, if they don't they don't. In
my experience, 95% of the time, they just sign and take what they get.
The other 5% of the time, they try to bargain, and I simply refuse to
yield on the issues that are important to me. At that point we either
agree or don't. The point is, that you should only agree to the things
that seem reasonable to you, and then only if you understand the legal
ramifications of what you are signing.

        Frankly, I wouldn't work for anyone who felt the need to bind me
to them by an exclusive use of my brain contract. First of all, it's my
brain not theirs. Second of all, they must be in pretty bad stead with
their employees if they have to use the law to force them to stay.
Companies that are really good don't have to force employees to stay,
the employees stay because they believe in the company and they get the
rewards they seek. Figure out what you want and what you're willing to
give for it, don't do what you don't believe in just because others are
doing it.
                                        Fred

------------------------------

Date: 13 Apr 84 16:33:52-EST (Fri)
From: Brian Nixon <nixon%toronto.csnet@csnet-relay.arpa>
Subject: Non-competition clauses

At least in Canada, the courts usually take a low view of such clauses
in employment contracts, UNLESS they are severely restricted in scope, e.g.
are for a period of less than 6 months, apply only to taking a job within
the same city, apply only to taking a job within a particular industry.

Brian Nixon, Dept. of Computer Science, Univ. of Toronto.

------------------------------

Date: 15 April 1984 17:53-EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Non-competition clauses

An excellent article on ''covenants not to compete'' and other non-
disclosure agreements is Davidson, ''Constructing OEM Nondisclosure
Agreements'', 24 Jurimetrics Journal 127 (1984).  The author notes that
after-employment restrictions are strong medicine, and therefore they
are narrowly construed as to time and subject matter.  In some states
(e.g., California) they are impermissible except in narrow
circumstances (such as the sale of a business and the like).  Likely
the best policy is to consult a lawyer.

If you really wish to steer students away from that company, I would
think the best way would be to name names.  Their employment terms are
hardly a secret in themselves.

  -- Steve

------------------------------

Date: Sun, 15 Apr 1984  10:19 EST
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: Employment restrictions

Of the responses I've received, both by mail and in person, several have
said that a three-year paid sabbatical wouldn't be so bad (but you would
be prevented from starting a company or doing anything that involves
significant machine resources or a team of people), several have
said that the clause probably wouldn't stand up in court (but it's no
fun to fight a big company in court), and many have said that this policy
would keep lots of good people away from the company in question.

Nobody has told me about any similar clause used by another company.
Some consulting firms require their employees to agree not to jump over
to work for the clients for a year or so, but that still leaves them with
lots of options within the field of AI.

In any event, the company that started all this is now reconsidering
its position and is trying to find some less restrictive way to
protect their proprietary information, so the whole issue may soon be
moot.  It's nice to find a company where the lawyers still work for the
researchers, and not vice versa.

  -- Scott Fahlman

------------------------------

Date: 9 Apr 84 22:55:52-PST (Mon)
From: hplabs!hao!cires!nbires!opus!rcd @ Ucb-Vax
Subject: Re: Stolfo's call for discussion
Article-I.D.: opus.346

>One way AI programming is different from much of the programming in other
>fields is that for AI it is often impossible to produce a complete set of
>specifications before beginning to code.
>
>The accepted wisdom of software engineering is that one should have a
>complete, final set of specifications for a program before writing a
>single line of code.  It is recognized that this is an ideal, not
>typical reality, since often it is only during coding that one finds
>the last bugs in the specs.  However, it is held up as a goal to
>be approached as closely as possible.

I submit that these statements are NOT correct in general for non-AI
programs.  Systems whose implementations are not preceded by complete
specifications include those which
        - involve new hardware whose actual capability (e.g., speed) is
          uncertain.
        - are designed with sufficiently new hardware and/or are to be
          manufactured in sufficient quantity that hardware price per-
          formance tradeoffs will change significantly in the course of the
          development.
        - require user-interface decisions for which no existing body of
          knowledge exists (or is adequate) - thus the user interface is
          strongly prototype (read: trial/error) oriented.
as well as the generally-understood characteristics of AI programs.  In
some sense, my criteria are equivalent to "systems which don't represent
problems already solved in some way" - and there are a lot of such
problems.
                                  --
"A friend of the devil is a friend of mine."            Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd                         (303) 444-5710 x3086

------------------------------

Date: 11 Apr 84 20:13:01-PST (Wed)
From: hplabs!tektronix!uw-beaver!teltone!warren @ Ucb-Vax
Subject: Re: Stolfo's call for discussion
Article-I.D.: teltone.252

Unexpectedness and lack of pre-specification occur in many professional
programming environments.  In AI particularly it occurs because
experimentation reveals unexpected results, as in all science.

In hardware (device-driver) code it occurs because the specs lie or
omit important details, or because you make an "alternative
interpretation".

In over-organized environments, where all the details are spelled out
to the nth degree in a stack of documents 8 feet high, unexpectedness
comes when you read the spec and discover the author was a complete idiot
having a very bad day.  I have seen alleged specs that were signed off
by all kinds of high mucky-mucks that are completely, totally, zonkers.
Not just in error, but complete gibberish, having no visible association
with either reality or thought, not to mention the project at hand.
At the very least, they are simply out of date.  Something crucial has
changed since the specs were written.

In business environments, it occurs when the president of the company
says he just changed the way records are to be kept, and besides, doesn't
like the looks of the reports he agreed to several months ago.  What's a
programmer to do?  Tell the boss to shove it?  The single most difficult
kind of programming occurs when  1) The user is your boss (or "has power").
2) The user is fairly stupid.  3) The user/boss is enough of a con
artist to prevent the programmer from leaving.  It is admitted, however,
that the difficulty is not technical, per se, but political.

All the above examples are from my professional experience, which spans
over ten years.  None of the situations is very unusual.  Unexpectedness
is part of our job.  In any case, 90 to 99% of the code in the AI systems
I've seen is much like any other program.  There are parsers, allocators,
symbol tables, error messages, and so on.  I'll let others testify to
the remainder of the code; it's been a while.

                                warren

------------------------------

End of AIList Digest
********************

∂19-Apr-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #49
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Apr 84  18:09:02 PST
Date: Thu 19 Apr 1984 16:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #49
To: AIList@SRI-AI


AIList Digest            Friday, 20 Apr 1984       Volume 2 : Issue 49

Today's Topics:
  AI Programming - Cfasling Pascal routines into Franz,
  AI Tools - Prolog on Symbolics Machines,
  Expert Systems - Real-Time Simulation,
  Jobs - Noncompetition Agreements,
  AI Programming - Discussion,
  Linguistics - Use of "and",
  Seminar - Puzzles and Permutation Groups,
  Conference - Expert Database Systems Workshop
----------------------------------------------------------------------

Date: 12 Apr 1984 07:52:11-EST
From: Nasa.Langley.Research.Center@CMU-RI-ROVER
Subject: Cfasling Pascal routines into Franz

  In our work with distributed intelligent systems for space teleoperators
and robotics, we have found the "cfasl" function in Franz Lisp to be very
useful, connecting previously-developed Fortran routines to the total
system. However, a need has arisen to use an external Pascal function,
and we have been unable to persuade Franz to accept this in our system.
We have even tried to front-end Pascal modules with Fortran in order to
cfasl, but can't even manage that. Any suggestions from someone who has
done this? We are running Franz opus 38.17 on a *VAX/VMS* 750, not Unix.
We do have the Eunice system.

The heart of the problem is that
I'm using VAX/VMS Pascal, not the Unix/Eunice Pascal. Evidently the VMS
Pascal is not generating global symbols in a way that is visible to the
cfasling functions. In fact, I haven't even gotten VMS Pascal modules to be
visible to the VMS linker when calling them from other languages, say Fortran.
Probably if I could get that, I could get the other.

Mailing address is:
      Nancy Orlando
      Mail Stop 152D
      NASA Langley Research Center
      Hampton, VA 23665
Thanks in advance...

Nancy Orlando

------------------------------

Date: Mon 16 Apr 84 16:56:05-CST
From: Oliver Gajek <LRC.Gajek@UTEXAS-20.ARPA>
Subject: Prolog on Symbolics machines

Does anyone know  whether there is  a PROLOG available  for a  Symbolics
Lisp machine?  If so, can you  run it simultaneously with Lisp and  call
it from there? And how does it compare to other implementations?

Thanks,

Oliver.

------------------------------

Date: 17 Apr 1984 21:27:06 EST
From: Perry W. Thorndyke <THORNDYKE@USC-ISI>
Subject: Real-Time Simulation

Response to request for information on AI-based real-time simulation:

We at Perceptronics are developing a real-time simulation of a Navy
tactical decision-making environment for use in an instructional system.
The environment simulates an air-sea battle situation in which the student
must command a ship, utilizing sensors, weapons, maneuvering, and deception
to defend himself against an opposing ship(s).  The battle simulation and
opponent simulation must run in real time to present a realistic training
situation. From an instructional perspective, the interesting research
issues involve (1) how to represent the skills associated with real-time
cognition on a time-stressed problem, and (2) how to make the opponent
simulation modifiable under program control by the instructional system
so that exercises can address particular pedagogical objectives.  We are
currently working in GLISP, which sits on top of Franz Lisp on a VAX.
We utilize 4 mb of main memory.

Perry Thorndyke
Perceptronics Knowledge Systems Branch
thorndyke@usc-isi

------------------------------

Date: 17 Apr 1984 21:59:32 EST
From: Perry W. Thorndyke <THORNDYKE@USC-ISI>
Subject: Noncompetition

Response to Fahlman's message on noncompetition clauses:

Scott,

We continually hire AI talent into our for-profit, public company to
conduct R&D on expert systems, surrogate instructors, intelligent human-
machine interfaces, and distributed AI.  Several of our products contain
proprietary hardware-software designs and our market advantage depends
on maintaining a technology edge in those product areas, which include
videodisc/graphics display systems.  Yet we have no such noncompetition
clause, nor have we considered imposing one.  Given that it is a
seller's market for AI talent now, it's hard to believe that any company
could get away with imposing such a policy--assuming that it is even
legally enforceable. My experience in the AI field is that conflict-of-
interest considerations do not extend beyond the term of employment
of the individual, except for non-disclosure of proprietary information.
The policy you cited seems extreme and undesirable, and constitutes
a moral, if not legal, unfair restraint of trade.

Perry Thorndyke
Perceptronics, Inc.
thorndyke@usc-isi

------------------------------

Date: Wed 18 Apr 84 11:22:44-PST
From: WYLAND@SRI-KL.ARPA
Subject: Stolfo's call for discussion

        Your question - "What are the fundamental characteristics
of AI computation that distinguish it from more conventional
computation." - is a good one.  It is answered, consciously or
unconsciously, by each of us as we organize our understanding of
the field.  My own answer is as follows:

        The fundamental difference between conventional programs
and AI programs is that conventional programs are static in
concept and AI programs are adaptive in concept.  Conventional
programs, once installed, have fixed functions: they do not
change with time.  AI programs are adaptive: their functions and
performance improve with time.

        A conventional program - such as a payroll program, word
processor, etc. - is conceived of as a static machine with a
fixed set of functions, like a washing machine.  A payroll
program is a kind of "cam" that converts the computer into a
specific accounting machine.  The punched cards containing the
week's payroll data are fed into one side of the machine, and
checks and reports come out the other side, week after week.  In
this concept, the program is designed in the same manner as any
other machine: it is specified, designed, built, tested, and
installed.  Periodic engineering changes may be made, but in the
same manner as any other machine: primarily to correct problems.

        AI programs are adaptive: the program is not a machine
with a fixed set of functions, but an adaptive system that grows
in performance and functionality.  This focus of AI can be seen
by examining the topics covered in a typical AI text, such as
"Artificial Intelligence" by Elaine Rich, McGraw-Hill, 1983.
The topics include:

  o  Problem solving: programs that solve problems.
  o  Game playing
  o  Knowledge representation and manipulation
  o  Natural language understanding
  o  Perception
  o  Learning

        These topics are concerned with adaptation, learning, or
any of several names for the same general concept.  This seems to
be the consistent characteristic of AI programs.  The interesting
AI program is one that can improve its performance - at solving
problems, playing games, absorbing and responding to questions
about knowledge, etc. - or one that addresses issues associated
with problem solving, learning, etc.

        The adaptive aspect of AI programs implies some
difference in methods used in the programs.  AI programs are
designed for change, both by themselves while running, and by the
original programmer.  As the program runs, knowledge structures
may expand and change in a number of dimensions, and the
algorithms that manipulate them may also expand - and change
THEIR structures.  The program must be designed to accommodate
this change.  This is one of the reasons that LISP is popular in
AI work: EVERYTHING is dynamically allocated and modifiable -
data structures, data types, algorithms, etc.
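The static/adaptive distinction above can be sketched in a few lines.  The
following is an illustrative toy in Python (not from the original message;
the class, names, and learning rule are all invented for this sketch): the
program's behavior depends on stored knowledge that the program itself
revises while running.

```python
class AdaptiveClassifier:
    """Toy 'adaptive' program: classifies numbers as 'small' or
    'large'; the threshold is mutable knowledge, not fixed code."""

    def __init__(self, threshold=10):
        self.threshold = threshold      # knowledge the program can revise

    def classify(self, x):
        return "small" if x < self.threshold else "large"

    def learn(self, x, correct_label):
        # Adapt: when feedback contradicts the current answer,
        # move the threshold so this example is classified correctly.
        if self.classify(x) != correct_label:
            self.threshold = x if correct_label == "large" else x + 1

c = AdaptiveClassifier()
print(c.classify(7))      # 'small' under the initial threshold
c.learn(7, "large")       # feedback changes the stored knowledge...
print(c.classify(7))      # ...and hence future behavior: 'large'
```

A conventional program, by contrast, would hard-wire the threshold;
changing its behavior would mean re-engineering the program rather than
letting it revise its own data.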

        Good luck in your endeavors!  It is a great field!

Dave Wyland
WYLAND@SRI

------------------------------

Date: 12 Apr 84 15:51:48-PST (Thu)
From: harpo!ulysses!burl!clyde!akgua!psuvax!burdvax!sjuvax!bbanerje @
      Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: sjuvax.254

>> There is another way of looking at the statement -
>>  all customers in Indiana and Ohio
>> which seems simpler to me than producing the new phrase -
>>  all customers in Indiana  AND all customers in Ohio
>> instead of doing this why not treat Indiana and Ohio as a new single
>> conceptual entity giving -
>>  all customers in (Indiana and Ohio).
>>
>> This seems simpler to me. It would mean the database would have to
>> allow aggregations of this type, but I don't see that as being
>> particularly problematic.
>>
>> Jim Cowie.

My admittedly inconsequential contribution to this:

(Pardon the Notation!  Here, Indiana and Ohio correspond to sets
of base type customer.  Cλ- denotes set membership and (~) is
intended to denote set intersection.)


All customers in Indiana AND all customers in Ohio seems to want the
following :

    [all customers such that |
        {customer Cλ- Indiana} XOR {customer Cλ- Ohio}]

This seems to be described best as

    [all customers such that |
        customer Cλ- {Indiana U Ohio - (Indiana (~) Ohio)}]

Assuming that no customer can be in Indiana and Ohio simultaneously,
the intersection of the sets would be NULL.  Thus we would have

    [all customers such that |
        customer Cλ- {Indiana U Ohio}]

So far so good.  However, the normal sense of an AND, as I understand
it, corresponds to a set intersection.  The formulation is therefore
counter-intuitive.
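The three readings discussed above can be checked directly with sets.  A
small Python sketch (illustrative only; the customer names are invented):

```python
# Customers "in" each state, as sets of a base type customer:
indiana = {"alice", "bob"}
ohio = {"bob", "carol"}       # pretend "bob" somehow counts in both

union = indiana | ohio                           # the intended reading
intersection = indiana & ohio                    # the literal-AND reading
sym_diff = (indiana | ohio) - (indiana & ohio)   # the XOR reading

print(sorted(union))          # ['alice', 'bob', 'carol']
print(sorted(intersection))   # ['bob']
print(sorted(sym_diff))       # ['alice', 'carol']
```

When the two state sets are disjoint, the XOR reading and the union
coincide, which is exactly the simplification made above.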

I'm not an AI type, so I would appreciate being set straight.  Flames
will be cheerfully ignored.

Regards,


                                Binayak Banerjee
                {allegra | astrovax | bpa | burdvax}!sjuvax!bbanerje

------------------------------

Date: 18 April 1984 15:27-EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: puzzles and permutation groups

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

DATE:     Thursday, April 19, 1984
TIME:     Lecture, 4:00pm
PLACE:    NE43-512a


   ``Generalized `15-puzzles' and the Diameter of Permutation Groups''

                        Dan Kornhauser
                              MIT

Sam Loyd's famous ``15-puzzle'' involves 15 numbered unit squares free to move
in a 4x4 area with one unit square blank.  The problem is to decide whether a
given rearrangement of the squares is possible, and to find the shortest
sequence of moves to obtain the rearrangement when it is possible.
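The decision half of the problem for the classical 4x4 case has a well-known
parity answer (not part of this talk's abstract; the Python code below is an
editorial illustration).  Each slide is a transposition of the blank with a
tile, so it flips the permutation's parity and changes the blank's taxicab
distance from its home cell by one; a position is reachable from the solved
state iff the two parities agree.

```python
def _permutation_parity(perm):
    """Parity (0 even, 1 odd) of a permutation given as a list of
    1-based values, computed by cycle counting."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j] - 1
    return (n - cycles) % 2

def solvable(board):
    """board: tuple of 16 ints in row-major order, 0 = blank.
    Solved state is 1..15 with the blank in the last cell."""
    perm = [16 if t == 0 else t for t in board]   # treat blank as tile 16
    blank = board.index(0)
    dist = (3 - blank // 4) + (3 - blank % 4)     # taxicab distance home
    return _permutation_parity(perm) == dist % 2

solved = tuple(range(1, 16)) + (0,)
swapped = list(solved)
swapped[13], swapped[14] = swapped[14], swapped[13]   # Loyd's 14-15 swap
print(solvable(solved), solvable(tuple(swapped)))     # True False
```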

A natural generalization of this puzzle involves a graph with @i(n) vertices,
and @i(k<n) tokens numbered @i(1,...,k) on distinct vertices.  A legal move
consists of sliding a token from its vertex to an adjacent unoccupied vertex.

Wilson (1974) obtained a criterion for solvability for biconnected graphs and
@i(k=n-1).  No polynomial upper bound on the number of moves was given.

We present a quadratic time algorithm for deciding solvability of the general
graph problem.  It is also shown that @i[O(n@+{3})] move solutions always exist
and can be efficiently planned.  Further, @i[O(n@+{3})] is shown to be a
matching lower bound for some graph puzzles.

We consider related puzzles of the Rubik's cube type, in the context of the
general permutation group diameter question.

This is joint work with Gary Miller, MIT, and Paul Spirakis, NYU

HOST:   Professor Silvio Micali

------------------------------

Date: 12 Apr 84 12:31:31-PST (Thu)
From: harpo!ulysses!allegra!carlo @ Ucb-Vax
Subject: Expert Database Systems Workshop  (long msg)
Article-I.D.: allegra.2406

               Call for Papers and Participation

       FIRST INTERNATIONAL WORKSHOP ON EXPERT DATABASE SYSTEMS

         October 25-27, 1984, Kiawah Island, South Carolina


  Sponsored by

  The Institute of Information Management, Technology, and Policy,
  College of Business Administration,
  University of South Carolina

  In Cooperation With

  Association for Computing Machinery - SIGMOD and SIGART

  IEEE Technical Committee on Data Base Engineering


  Workshop Program

  This workshop  will  address  the  theoretical  and  practical  issues
  involved  in  making databases more knowledgeable and supportive of AI
  applications.  The tools and techniques  of  database  management  are
  being  used  to  represent  and  manage more complex types of data and
  applications environments.

  The rapid growth of online systems containing text, bibliographic, and
  videotex  databases with their specialized knowledge, and the develop-
  ment of expert systems for scientific, engineering and business appli-
  cations  indicate the need for intelligent database interfaces and new
  database system architectures.

  The workshop will bring together researchers  and  practitioners  from
  academia  and industry to discuss these issues in Plenary Sessions and
  specialized Working Groups.  The Program Committee will invite  40  to
  80  people,  based  on submitted research and application papers (5000
  words) and issue-oriented position papers (2000-3000 words).

  Topics of Interest

  The Program Committee invites papers addressing (but not  limited  to)
  the following areas:

  Knowledge Base Systems                 Knowledge Engineering
  environments                           acquisition
  architectures                          representation
  languages                              design
  hardware                               learning

  Database Specification Methodologies   Constraint and Rule Management
  object-oriented models                 metadata management
  temporal logic                         data dictionaries
  enterprise models                      constraint specification
  transactional databases                 verification, and enforcement

  Reasoning on Large Databases           Expert Database Systems
  fuzzy reasoning                        natural language access
  deductive databases                    domain experts
  semantic query optimization            database design tools
                                         knowledge gateways
                                         industrial applications

  Please send five (5) copies of full papers or position papers by  June
  1, 1984 to:

                Larry Kerschberg, Program Chairperson
                College of Business Administration
                University of South Carolina
                Columbia, SC, 29208
                (803) 777-7159 / (803) 777-5766 (messages)
                USENET: ucbvax!allegra!usceast!kersch
                CSNET:  kersch@scarolina

  Submissions will be considered by the Program Committee:

  Bruce Berra, Syracuse University            Sham Navathe, Univ. of Florida
  James Bezdek, Univ. of South Carolina       Erich Neuhold, Hewlett-Packard
  Michael Brodie, Computer Corp. of America   Stott Parker, UCLA
  Janis Bubenko, Univ. of Stockholm           Michael Stonebraker, UC-Berkeley
  Peter Buneman, Univ. of Pennsylvania        Yannis Vassiliou, New York Univ.
  Antonio L. Furtado, PUC-Rio de Janeiro      Adrian Walker, IBM Research Lab.
  Jonathan King, Symantec                     Bonnie L. Webber, U. of Penn.
  John L. McCarthy, Lawrence Berkeley Lab.    Gio Wiederhold, Stanford Univ.
  John Mylopoulos, University of Toronto      Carlo Zaniolo, AT&T Bell Labs




  Authors will be notified of acceptance or rejection by July 16,  1984.
  Preprints  of  accepted  papers  will  be  available  at the workshop.
  Workshop presentations, discussions, and working group reports will be
  published in book form.



    Workshop General Chairman           Local Arrangements Chairperson

    Donald A. Marchand                  Cathie Hughes-Johnson

    Institute of Information Management, Technology and Policy
    (803) 777-5766

    Working Group Coordinator           Industrial Liaison

    Sham Navathe                        Mas Tsuchiya
    Computer and Information Sciences   TRW 119/1842
    University of Florida               One Space Park Drive
    512 Weil Hall                       Redondo Beach, CA 90278
    Gainesville, FL 32611               (213) 217-6114
    (904) 392-7442



  _________________________________________________________________________
             Response Card (Please mail to the address below)

  Name  ___________________________________________ Telephone _____________

  Organization  ___________________________________________________________

  Address  ________________________________________________________________
  City, State,
  ZIP, and Country ________________________________________________________

       Please check all that apply:

       _____ I intend to submit a research paper.
       _____ I intend to submit an issue-oriented position paper.
       _____ I would like to participate in a working group.
             General Topic Areas _________________________________________
       _____ Not sure I can participate, but please keep me informed.

  Subject of paper ______________________________________________________

  _______________________________________________________________________




                   Cathie Hughes-Johnson
                   Institute of Information Management
                   Technology and Policy
                   College of Business Administration
                   University of South Carolina
                   Columbia, SC 29208

------------------------------

End of AIList Digest
********************

[rdg - changed ← to _ above]
∂21-Apr-84  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #50
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Apr 84  11:41:05 PST
Date: Fri 20 Apr 1984 10:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #50
To: AIList@SRI-AI


AIList Digest           Saturday, 21 Apr 1984      Volume 2 : Issue 50

Today's Topics:
  AI Programming - Characterization & Software Engineering,
  AI Literature - Computer Database & Metaphor and Sociolinguistics &
    Automated Reasoning Book,
  Expert Systems - DARPA Sets Expert System Goals,
  Administrivia - Creation of Pascal Mailing List,
  Humor - Lady Lovelace's Encryption Algorithm,
  Seminars - Model-Based Vision & Robot Design Issues
----------------------------------------------------------------------

Date: Mon 16 Apr 84 13:52:04-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: RE: AI Programming

I think a major difference between most AI programming and most non-AI
programming is that AI programming usually involves implementing
additional layers of interpretation on top of whatever programming
system is being employed.  Any system that needs to reason about its
own actions, its own assumptions, and so on, requires this extra layer
of interpretation. The kinds of programs that I work on--learning
programs--also need to modify themselves as they run.  This helps
explain why LISP is so popular--it provides very good support for
building your own interpreters: the ability to dynamically define new
symbols, the ability to construct arbitrary binding environments, and
the ability to invoke EVAL on arbitrary expressions.  Perhaps LISP
is best viewed as an interpreter language rather than a programming
language.
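The "extra layer of interpretation" idea can be sketched in a few lines of
Python (an editorial illustration, not from the message; the rule format is
invented): the system's rules are ordinary data structures that the host
program scans and applies, so the running program can inspect and extend
its own "program".

```python
# Each rule is (name, tag, action) -- plain data, not compiled-in code.
rules = [("doubler", "double", lambda x: x * 2)]

def interpret(tag, arg, rulebase):
    """A one-layer interpreter: dispatch by scanning the rulebase."""
    for _name, t, action in rulebase:
        if t == tag:
            return action(arg)
    raise LookupError(tag)

# Because the rules are data, the running system can modify itself:
rules.append(("negator", "neg", lambda x: -x))
print(interpret("double", 3, rules))   # 6
print(interpret("neg", 5, rules))      # -5
```

In LISP the same move is more seamless, since code itself is list data and
EVAL is available, but the layering principle is the same.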

  --Tom

------------------------------

Date: 16 Apr 84 6:18:16-PST (Mon)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax
Subject: Incomplete specifications ...
Article-I.D.: ccivax.111

In reference to the recent discussion about Software
Engineering and incomplete specifications.

   For any new computer system, specifications at some
point must be incomplete.  A computer program is a
new machine -- it's never been constructed before.
So the final details always remain until the end.  This does
not mean that one does not begin construction.  On the
contrary, it seems to this writer that all too often
construction begins before any specifications are written.
What's needed is a middle path.  Design people need
enough requirements and constraints ( specifications )
to start work.  What should be provided is concise
documentation of the requirements and constraints, as
well as documentation of the unknowns and the risk.
Designers as they work will learn more about what is
and is not possible and this information will refine
the specifications.  But holes will remain.  This kind
of "evolutionary" development has been described by
Carl Hewitt in an article entitled "Evolutionary
Programming" in SOFTWARE ENGINEERING, edited by
H. Freeman and P.M. Lewis II (NY: Academic Press, 1980).

   I submit that any computer system development must
be a risk, and that it can only be developed by proceeding
with incomplete specifications.  The complement to this
is that large projects must be reviewed for viability
as knowledge is gained through this evolutionary
growth.  Sometimes it's better to quit before good money
is wasted.

   There's more to this issue than what is written here.
But it is not correct to hold AI programming up as some
sort of magical paradigm that is not subject to rudimentary
engineering discipline.  Software Engineering may indeed
have much to learn from the AI style of programming,
but programming in general has much to learn from engineering
disciplines also.

        Bill Anderson

    ...!ucbvax!amd70!rocksvax!ritcv!ccivax!band
    ...!{allegra | decvax}!rochester!ccivax!band

------------------------------

Date: Tue 17 Apr 84 09:31:17-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: New File on Dialog-- The Computer Database

          [Forward from the Stanford bboard by Laws@SRI-AI.]

The Computer Database is a new file on Dialog which covers computers,
telecommunications and Electronics.  The file went online in January and
covers material from 1983 to date.  The documentation which comes with
the file has a thesaurus which appears to be very up to date in terminology
for online searching.  The journals indexed include ACM publications, AI,
Industrial Robot, SIAM publications, and IEEE, as well as Infoworld, PC World,
Dr. Dobbs, Byte, etc.

[...]

Harry

------------------------------

Date: Wed, 18 Apr 1984 13:18:02 EST
From: FF Bottles of Beer on the Wall,...
      <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Discounted Books

     A number of books on metaphor and sociolinguistics that I mentioned
in an earlier message are now on sale by their publisher, the University of
Pennsylvania Press.  The sale catalog is available by writing them at
3933 Walnut Street, Philadelphia, PA 19104.  Minimum order is $10.00.

     Among the items available and of interest to AI researchers are:

Sapir & Crocker, "The Social Use of Metaphor" $8.75 (50% off)
Hymes, "Foundations in Sociolinguistics" $6.97 (30% off)
Kirschenblatt-Gimblett, "Speech Play", $6.00 (70% off)
Weinreich, "On Semantics"  $10.50 (70% off)
Labov, "Sociolinguistic Patterns", $10.00 (60% off)
Maranda & Maranda, "Structural Analysis of Oral Tradition", $8.40 (60%off)

  --Dave Axler

------------------------------

Date: 28-Mar-84 12:33:59-CST (Wed)
From: Larry Wos <Wos@ANL-MCS>
Subject: Automated Reasoning

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

The book, Automated Reasoning:  Introduction and Applications, by
Wos,  Overbeek,  Lusk, and Boyle, is now available from Prentice-
Hall.  It introduces basic concepts by showing how  an  automated
reasoning program can be used to solve various puzzles.  The puz-
zles include the "truthtellers and liars" puzzle that was  exten-
sively  discussed  in  the  Prolog Digest, McCarthy's domino and
checkerboard puzzle, and the billiard ball and balance scale puz-
zle.   The  book  is  written in a somewhat informal style and no
background is required.  It also contains a rigorous treatment of
the  elements of automated reasoning.  The book relies heavily on
examples, includes many exercises, and discusses various applica-
tions  of  automated  reasoning.   The applications include logic
circuit design,  circuit  validation,  research  in  mathematics,
research  in formal logic, control systems, and program verifica-
tion.  Other chapters of the book provide an introduction to Pro-
log  and  to  expert  systems.  The last chapter, "The Art of Au-
tomated Reasoning", gives guidelines for choosing representation,
inference rules, and strategies.

     The book is based on examples actually  solved  by  existing
automated  reasoning  programs.   Certain  of  these programs are
available and portable.  The book can be used as a college  text,
consulted  by  those  who wish to study possible applications, or
simply read by the curious.

It can be ordered directly from Prentice-Hall with a Visa
or MasterCard by calling 800-526-0485; the ISBN
is 0-13-054446-9 for the soft cover.  The soft cover
is $18.95, and the hard cover $28.95.

  -- LW

------------------------------

Date: 17-Apr-84 17:24 PST
From: William Daul  OAD / TYMSHARE / McDonnell Douglas 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: DARPA Sets Expert System Goals

From DEFENSE ELECTRONICS (April 1984):

Among the goals established for DARPA's expert systems technology program are
increased storage capacity and reasoning power that can deal with 10,000 rules
and provide 4,000 rule inferences per second for stand-alone systems and 30,000
rules and 12,000 inferences per second for multiple cooperating expert systems.
The program, part of DARPA's strategic computing initiative, is aimed at
achieving a framework to support battle management applications.  The Air
Force's Rome Air Development Center will be issuing RFPs in nine technical
areas: explanation and presentation capability, ability to handle uncertain and
missing knowledge, fusion of information from several sources, flexible control
mechanisms, knowledge acquisition and representation, expansion of knowledge
capacity and extent, enhanced inference capability, exploiting expert systems
on multiprocessor architectures, and development of cooperative distributed
expert systems.  Multiple contract awards are planned for each area, and one
or two additional awards are planned for complete system development.

------------------------------

Date: Wed, 11 Apr 84 8:48:51 EST
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@Brl-Voc.ARPA>
Subject: Creation of new mailing list

Hi,

A new special interest mailing list called info-pascal has been
created.  Enclosed below is the summary for the list.  If you would
like to be added to the list, please check with your local Postmaster
or send a message to info-pascal-request@brl-voc.

                                        dsw, fferd
                                        Fred S. Brundick
                                        aka Pascal Postman
                                        USABRL, APG, MD.
                                        <info-pascal-request@brl-voc>

     -----------------------------------------------------------

INFO-PASCAL@BRL-VOC.ARPA

   This list is intended for people who are interested in the programming
   languages Pascal and Modula-2.  Discussions of any Pascal/Modula-2 imple-
   mentation (from mainframe to micro) are welcome.

   Archives are kept on SIMTEL20 in the files:
      MICRO:<CPM.ARCHIVES>PASCAL-ARCHIV.TXT    (current archives)
      MICRO:<CPM.ARCHIVES>PASCAL.ARCHIV.ymmdd  (older archives)

   All requests to be added to or deleted from this list, problems, questions,
   etc., should be sent to INFO-PASCAL-REQUEST@BRL-VOC.ARPA.

   Coordinator: Frederick S. Brundick <fsbrn@brl-voc.arpa>

------------------------------

Date: 19 Apr 1984 12:34:36-EST
From: walter at mit-htvax
Subject: Seminar - Lady Lovelace's Encryption Algorithm

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 ANNALS OF COMPUTER SCIENCE SEMINAR
                   DATE:  Friday, April 20th, 1984
                   TIME:  Refreshments  12:00 noon
                  PLACE:  MIT AI Lab 8th Floor Playroom

                 LADY LOVELACE'S ENCRYPTION ALGORITHM

                               ABSTRACT

        Znk loxyz iusv{zkx vxumxgsskx }gy g totkzkktzn3iktz{xλ tuhrk}usgt2
        Rgjλ G{m{yzg Gjg Hλxut Ru|krgik2 jg{mnzkx ul znk vukz Ruxj Hλxut.
        Gy g zkktgmkx2 G{m{yzg joyvrgλkj gyzutoynotm vxu}kyy ot sgznksgzoiy.
        ]nkt ynk }gy komnzkkt g{m{yzg loxyz yg} Ingxrky Hghhgmk-y gtgrλzoigr
        ktmotk2 g igri{rgzotm sginotk zngz }gy znk luxkx{ttkx ul znk sujkxt
        iusv{zkx.  Ot komnzkkt luxzλ3z}u2 ynk zxgtyrgzkj g vgvkx ut znk
        ktmotk lxus Lxktin zu Ktmroyn gjjotm nkx u}t |ur{sotu{y tuzky. Ot
        y{hykw{ktz }xozotmy ynk jkyixohkj znk (ruuv( gtj (y{hxu{zotk(
        iutikvzy g iktz{xλ hkluxk znkox osvrksktzgzout ot krkizxutoi
        jomozgr iusv{zkxy .h{z gy lgx gy O qtu}2 nu}k|kx2 ynk tk|kx joj
        gtλznotm }ozn ktixλvzout/. Rgjλ Ru|krgik gtj Hghhgmk ngj g rutm
        gtj iruyk lxoktjynov gtj ynk }gy g jkjoigzkj vgxztkx ot noy }uxq
        }ozn znk gtgrλzoigr ktmotk.  [tluxz{tgzkrλ ynk }gy nkrj hgiq hλ
        gtzo3lksotoyz gzzoz{jky gtj hλ nkx u}t uhykyyout }ozn mgshrotm ut
        nuxyk xgiky. Rgjλ Ru|krgik jokj ul igtikx gz gmk znoxzλ3yo~. Tu}
        zngz λu{|k jkiujkj znoy skyygmk2 rkz-y grr mkz hgiq zu }uxq.

        This fascinating historical discussion and the
        accompanying Graduate Student Lunch will be hosted
        by Dan Carnese and Maria Gruenewald.

------------------------------

Date: 18 Apr 1984  14:38 EST (Wed)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Model-Based Vision

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                          W. ERIC L. GRIMSON

                         Local Constraints in
               Model Based Recognition and Localization
                           From Sparse Data

                       April 23, 1984   4:00PM
                       NE43-8th floor playroom


  A central characteristic of advanced applications in robotics is the
presence of significant uncertainty about the identities and attitudes
of objects in the workspace of a robot.  The recognition and
localization of an object, from among a set of models, using sparse,
noisy sensory data can be cast as the search for a consistent matching
of the data elements to model elements.  To minimize the computation,
local constraints are needed to limit the portions of the search space
that must be explicitly explored.

  We derive a set of local geometric constraints for both the three
degree of freedom problem of isolated objects in stable positions, and
the general six degree of freedom problem of an object arbitrarily
oriented in space.  We establish that the constraints are complete for
the case of three degrees of freedom, but not for six.  We then show
by combinatorial analysis that the constraints are generally very
effective in restricting the search space and provide estimates for
the number of sparse data points needed to uniquely identify and
isolate the object.  These results are supported by simulations of the
recognition technique under a variety of conditions that also
demonstrate its graceful degradation in the presence of noise.  We
also discuss examples of the technique applied to real data from
several sensory modalities including laser ranging, sonar, and grey
level imaging.


Refreshments:  3:45PM

Host:  Professor Patrick H. Winston

------------------------------

Date: Wed 18 Apr 84 14:43:58-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Seminar - Robot Design Issues

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

WHAT: FOUNDATIONAL ISSUES IN ROBOT DESIGN AND THEIR METHODOLOGICAL CONSEQUENCES
WHO: Stan Rosenschein,  Artificial Intelligence Center, SRI International
SERIES: Issues in Language, Perception and Cognition
WHERE: Room 100, Psychology Dept.
WHEN: Monday April 23, 1:00pm         <- * Note Change *


The design of software which would allow robots to exhibit complex
behavior in realistic physical environments is a central goal of
Artificial Intelligence (AI).  In structuring its approaches to this
problem, AI has over the years been guided by a melange of concepts
from logic, computer programming, and (prominently) by certain
pretheoretic intuitions about mental life and its relationship to
physical events embodied in ordinary "folk psychology."  This talk
presents two contrasting views of how information, perception, and
action might be modeled by a robot designer depending on how seriously
he took "folk psychology."  One view takes the ascription of mental
properties to machines quite seriously and leads to a methodology in
which the abstract entities of folk psychology ("beliefs," "desires,"
"plans," "intentions", etc.)  are realized in a one-for-one fashion as
data structures in the robot program. Frequently these data structures
resemble, in certain ways, the sentences of an interpreted logical
language in that they are taken to express the "content" of the
belief, desire, etc.  The alternative view does not assume this degree
of mental structure a priori.  Logic may figure prominently, but it is
used chiefly BY THE DESIGNER to define and reason about the
environment and its relation to desired robot behavior. The talk will
suggest an automata-theoretic approach to the content of information
states which sidesteps many of the presuppositions of the folk
psychology.  The implications of such an approach for a systematic
robot software methodology will be discussed, including the
possibility of "organism compilers."  The thesis that AI's reliance on
folk psychology is, on balance, useful will be left unresolved though
certainly not unquestioned.

------------------------------

End of AIList Digest
********************

∂22-Apr-84  1629	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #51
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Apr 84  16:28:13 PST
Date: Sun 22 Apr 1984 15:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #51
To: AIList@SRI-AI


AIList Digest            Sunday, 22 Apr 1984       Volume 2 : Issue 51

Today's Topics:
  AI Tools - Review of LISP Implementations,
  Computational Linguistics - Stemming Algorithms & Survey,
  Linguistics - Use of "and" & Schizophrenic Misuse of Metaphors,
  Correction - Lovelace Encryption Seminar,
  Seminars - Combining Logic and Functional Programming &
    Learning Design Synthesis Expertise
----------------------------------------------------------------------

Date: 20 Apr 84 22:22:44 EST  (Fri)
From: Wayne Stoffel <wes%umcp-cs.csnet@csnet-relay.arpa>
Subject: Review of LISP Implementations

Re: Bill Wong's article on three LISP implementations

He also wrote a series on AI languages that appeared in Microsystems.  All
were 8-bit CP/M implementations.

August 1983, muLisp-80, SuperSoft Lisp, and Stiff Upper Lisp.

December 1983, XLISP, LISP/80, and TLC Lisp.

January 1984, micro-Prolog.

                                W.E. Stoffel

------------------------------

Date: Fri, 20 Apr 84 18:15 EST
From: Ed Fox <fox%vpi.csnet@csnet-relay.arpa>
Subject: Algorithms for word stemming and inverse stemming (generate
         word)?

   [Forwarded from the Arpanet-BBoards distribution by Laws@SRI-AI.]

Please send code, references, comments - about systems which can transform
words to stems and stems to words, in an efficient and effective fashion with
very small tables.  There are a number of stemming algorithms, and some
systems that generate words from root+attribute_information.  I would be
interested in a list of such, and especially of systems that do both in
an integrated fashion.  Preferred are systems that can run under 4.x UNIX.
   Many thanks, Ed Fox (fox.vpi@csnet-relay)

------------------------------

Date: Thu, 19 Apr 84 15:59:18 est
From: crane@harv-10 (Greg Crane)
Subject: foreign language dbases, linguistic analysis, for lang
         word-proc

  [Forwarded from the Arpanet-BBoards distribution by Laws@SRI-AI.]

Linguists, philologists, humanists etc. --

        Are you using a computer for linguistic analysis? Access of
big foreign language data bases (Toronto Old English Dbase, or the
Thesaurus Linguae Graecae for example)?  Analysis or storage
of variant readings or versions?  Dictionary projects?

        We have been doing a lot here, but nobody seems to have any
overall picture of what is being done round about. I would like to
find out, and I think it's time those who are doing much the same thing
started talking. Any ideas on where a lot of work is being done and
how to facilitate communication?

                                        Gregory Crane
                                        Classics Department
                                        Harvard University

------------------------------

Date: Fri 20 Apr 84 20:06:52-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Use of "and"

Come on, folks.   When someone says "my brothers and sisters" they do not mean
the intersection of the two sets.   Aside from its legal meaning of "or" which
I mentioned earlier, the English word "and" has at least two more meanings:
logical conjunction, and straight addition (which means union when applied to
sets).   Though I'm willing to be contradicted, I believe that English usage
prefers to intersect predicates rather than sets.   For example, "tall and fat
people" can mean people who are both tall and fat (intersection), but "tall
people and fat people" means both the set of people who are tall and the set of
people who are fat (union).
                                        - Richard

------------------------------

Date: 16 Apr 84 9:12:00-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: uiucdcs.32300023

>>From watrose!japlaice
>>              There are several philosophical problems with treating
>>      `Indiana and Ohio' as a single entity.
>>              The first is that the Fregean idea that the sense of a sentence
>>      is based on the sense of its parts, which is thought valid by most
>>      philosophers, no longer holds true.
>>              The second is that ... `unicorn', `hairy unicorn', `small,
>>      hairy unicorn' ... are all separate entities ...

On the contrary, the sense of "Indiana and Ohio" is still based on the senses
of "Indiana", "and" and "Ohio", if only we disambiguate "and". The ambiguity
of conjunction is well-known: the same word represents both a set operator and
a logical operator (among others). Which set operator? The formula
        X in ({A} ANDset {B})  <=  (X in {A}) ANDlog (X in {B})
allows ANDset to be either intersection or union. It is only our computational
bias that leads us to confuse the set with the logical operator. The formula
        X in ({A} ANDset {B})  <=>  (X in {A}) ANDlog (X in {B})
forces ANDset to be an intersector.

But we need only distinguish ANDset and ANDlog to preserve Fregean
compositionality; for that, it's immaterial which ANDset we adopt. In any
case, Bertrand Russell's 1905 theory of descriptions (as I read it) seems to
refute strict compositionality (words are meaningless in isolation -- they
acquire meaning in context).
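The claim behind the two formulas can be verified mechanically over a small universe; a minimal sketch in Python (the universe and helper names are illustrative):

```python
# Check that the one-way implication
#     X in (A ANDset B)  <=  (X in A) ANDlog (X in B)
# is satisfied by both union and intersection, while the biconditional
# forces ANDset to be intersection.  Verified exhaustively over a
# three-element universe.
from itertools import chain, combinations

universe = {1, 2, 3}
subsets = [set(c) for c in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]

def implication_holds(op):
    # For all A, B, x: (x in A and x in B) implies x in op(A, B).
    return all((not (x in A and x in B)) or (x in op(A, B))
               for A in subsets for B in subsets for x in universe)

def biconditional_holds(op):
    # For all A, B, x: x in op(A, B) iff (x in A and x in B).
    return all((x in op(A, B)) == (x in A and x in B)
               for A in subsets for B in subsets for x in universe)

union = lambda A, B: A | B
intersection = lambda A, B: A & B

print(implication_holds(union), implication_holds(intersection))      # True True
print(biconditional_holds(union), biconditional_holds(intersection))  # False True
```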

Secondly, I don't recall Quine saying that `unicorn', `hairy unicorn', `small,
hairy unicorn' should all be indistinguishable. They may have the same referent
without having the same meaning.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign
                                        { ihnp4 | pur-ee } ! uiucdcs ! marcel

------------------------------

Date: 17 Apr 84 7:52:10-PST (Tue)
From: harpo!ulysses!gamma!pyuxww!pyuxss!aaw @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: pyuxss.311

[audi alteram partem]

For some interesting study of the understanding of metaphors of the type you
refer to, look into Silvano Arieti's (psychiatrist, NYU) work on
schizophrenic misuse of metaphors. It offers some deep insights into the
relationship between metaphor and logic.
                        {harpo,houxm,ihnp4}!pyuxss!aaw
                        Aaron Werman

------------------------------

Date: 21 Apr 1984 17:00-PST
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Lovelace Encryption Seminar

With regard to coded messages, I think natural stupidity has replaced
artificial intelligence. Fortunately, I have a
program to deal with Walter's kind. So that nobody has to run a program
of their own, here's an approximate translation:

                       ------------------------
The first computer programmer was a nineteenth century noblewoman,
Lad Augusta Ada Bron Lovelace, daughter of the poet Lord Bron.
As a teenager, Augusta displaed astonishing prowess in mathematics.
When she was eighteen augusta first saw Charles Babbage's analtical
engine, a calculating machine that was the forerunner of the modern
computer. In eighteen fortytwo, she translated a paper on the
engine from French to Knglish adding her own voluminous notes. In
subse:uent writings she described the "loop" and "subroutine"
concepts a centur before their implementation in electronic
digital computers .but as far as I know, however, she never did
anthing with encrption/. Lad Lovelace and Babbage had a long
and close friendship and she was a dedicated partner in his work
with the analtical engine. Unfortunatel she was held back b
antiyfeminist attitudes and b her own obsession with gambling on
horse races. Lad Lovelace died of cancer at age thirtysix. Now
that ouve decoded this message, let's all get back to work.
                     ---------------------------

Please, Walter, next time you want to get the message out:
#@(& $%& $#(& (↑$% ↑&(#$&%! (& %($( (* ↑&*(*% &%& @&&#&& $#&$&%!
                                        Fred

[The responsibility for forwarding the previous message, and this one,
to the AIList readership rests with me.  -- KIL, AIList-Request@SRI-AI.]

------------------------------

Date: Wed 18 Apr 84 14:13:31-PST
From: SHORT%hp-labs.csnet@csnet-relay.arpa
Subject: Seminar - Combining Logic and Functional Programming

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                          JOSEPH A. GOGUEN
                         SRI International

COMBINING LOGIC AND FUNCTIONAL PROGRAMMING -- WITH EQUALITY, TYPES, MODULES
AND GENERICS TOO!

         Hewlett Packard Computer Colloquium - April 26, 1984

This joint work with J. Meseguer shows how to extend the paradigm of logic
programming with some features that are prominent in current programming
methodology, without sacrificing logical rigor or efficient implementation.
The first and most important of these features is functional programming;
full logical equality provides an elegant way to combine the power of Prolog
(with its logical variables, pattern matching and automatic backtracking)
with that of functional programming (supporting functions and their
composition, as well as strong typing and user definable abstract data types).
An interesting new feature that emerges here is a complete algorithm for
solving equations that contain logical variables; this algorithm uses
"narrowing", a technique from the theory of rewrite rules.  The underlying
logical system here is many-sorted Horn clause logic with equality.  A
useful refinement is "subsorts", which can be seen as an ordering relation
on the set of sorts (usually called "types") of data.  Finally, we provide
generic modules by using methods developed in the specification language
Clear.  These features are all embedded in a language called Eqlog; we
illustrate them with a program for the well-known Missionaries and Cannibals
problem.
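Eqlog itself is not shown here, but the Missionaries and Cannibals problem the abstract mentions can be sketched as a plain breadth-first search (Python, not Eqlog's narrowing-based equation solving; all names are invented):

```python
# Breadth-first search for Missionaries and Cannibals: 3 of each, boat
# holds at most 2.  State = (missionaries on left, cannibals on left,
# boat side), where boat side 1 = left bank, 0 = right bank.
from collections import deque

def safe(m, c):
    # Missionaries are never outnumbered on either bank.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    start, goal = (3, 3, 1), (0, 0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, b = state
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            sign = -1 if b == 1 else 1   # boat carries people across
            nm, nc = m + sign * dm, c + sign * dc
            nxt = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) \
                    and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)

solution = solve()
print(len(solution) - 1)  # number of crossings; the classic answer is 11
```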

Thursday, April 26, 1984                 4:00 p.m.

Hewlett Packard Laboratories
Computer Research Center
1501 Page Mill Road
Palo Alto, CA 94304
5M Conference Room

------------------------------

Date: 19 Apr 84 13:28:14 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Seminar - Learning Design Synthesis Expertise

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


Learning  Design  Synthesis  Expertise  by  Harmonizing  Behaviors with
                            Specifications

Speaker:      Masanobu Watanabe <Watanabe@Rutgers.Arpa>
              NEC Corporation, Tokyo, Japan
              Visiting Researcher, Rutgers University

Series:       Machine Learning Brown Bag Seminar
Date:         Wednesday, April 25, 1984, 12:00-1:30
Location:     Hill Center, Room 254


       VEXED is an expert system which supports interactive circuit design.
    VEXED provides suggestions  regarding  alternative  implementations  of
    circuit modules, as well as warnings regarding conflicting constraints.
    The   interactions  between  a  human  designer  and  the  system  give
    opportunities for the system to learn expertise in design synthesis  by
    monitoring  the  human  designer's  response  to  advice offered by the
    system.  From this point of view, there are two interesting cases.  One
    occurs  when  the  designer  ignores the advice of the system.  Another
    occurs when the system cannot provide any advice but the human designer
    can continue his own design.

       The system has to learn as many things as possible  by  analyzing  a
    single  precious  example,  because  it  is difficult for the system to
    obtain many examples from which to form  a  particular  concept.    The
    problem space in the module decomposition process can be viewed as one
    whose states are sets of modules and whose operators are called
    implementation rules.  This talk discusses the
    implementation rule acquisition task which is intended to formulate  an
    implementation rule at an appropriate level of generality by monitoring
    a   designer's   circuit   implementation.    This  task  is  to  learn
    implementation rules (a kind of operator,  but  not  quite  like  LEX's
    operators), while LEX's task is to learn heuristics which serve to
    guide the selection of useful operators.

------------------------------

End of AIList Digest
********************

∂24-Apr-84  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #52
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Apr 84  22:50:09 PST
Date: Tue 24 Apr 1984 21:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #52
To: AIList@SRI-AI


AIList Digest           Wednesday, 25 Apr 1984     Volume 2 : Issue 52

Today's Topics:
  AI Tools - Another Microcomputer Lisp,
  Linguistics - Metaphors & Use of "And",
  Journal Announcement --  Data and Knowledge Engineering,
  Seminar - Nondeterminism and Creative Analogies
----------------------------------------------------------------------

Date: Sun 22 Apr 84 22:11:14-PST
From: Sam Hahn (Samuel@Score)
Reply-to: SHahn@SUMEX-AIM.ARPA
Subject: Another microcomputer Lisp

In line with the previous mentions of microcomputer implementations of Lisp,
how about this pointer:

I saw in the current (May) issue of Microsystems an advertisement for
Waltz Lisp, from ProCode International.  "Waltz Lisp is not a toy.  It is the
most complete microcomputer Lisp, including features previously available only
in large Lisp systems.  In fact, Waltz is substantially compatible with Franz
... and is similar to MacLisp and Lisp Machine Lisp."

Does anyone know anything about Waltz?  How about a review?

[further claims:        functions of type lambda, nlambda, lexpr, macro
                        built-in prettyprinting and formatting
                        user control over all aspects of the interpreter
                        complete set of error handling and debugging functions
                        over 250 functions in total                     ]

They're at P.O. Box 7301, Charlottesville, VA  22906.

------------------------------

Date: 17 Apr 84 17:06:46-PST (Tue)
From: harpo!ulysses!burl!clyde!watmath!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: dciem.861

There is a very large literature on metaphor. As a start, try
A. Ortony (Ed.) Metaphor and Thought. New York: Cambridge U Press, 1979.

A new journal called "Metaphor" is being started up with first issue
probably in Jan 1985.  Sorry, I don't have ordering information.

In AI, check out the work of Carbonell.

Once you start getting a few leads, you will be overwhelmed by studies.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 18 Apr 84 9:22:00-PST (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: uiucdcs.32300025

You might like to think about partial matching as a step toward analogical
or metaphorical reasoning. Try the following:

Fox, MS and Mostow, DJ  Maximal consistent interpretations of errorful data in
        hierarchically modelled domains. IJCAI-77, 165ff.
Kline, PJ  The superiority of relative criteria in partial matching and
        generalization. IJCAI-81, 296ff

Perhaps also check the growing literature on abductive reasoning, hypothesis
formation, disambiguation, categorization, diagnosis, etc. Some papers I found
most interesting:

Carbonell, JR and Collins, AM  Natural semantics in Artificial Intelligence.
        IJCAI-73, 344ff. { the SCHOLAR system }
Collins, A  et al   Reasoning from incomplete knowledge. In BOBROW & COLLINS's
        book "Representation and Understanding", Academic Press, NY (1975).
        { more on SCHOLAR }
Pople, HE  On the mechanization of abductive logic. IJCAI-73, 147ff



In NL research the role of expectations has become important to expedite
disambiguation, including the use of attention focusing. Some very well-known work
at Yale on this. See eg papers by Riesbeck and Schank in the book by Waterman
& Hayes-Roth ('78), and by Schank & DeJong in Machine Intelligence 9. Lots of
other work too. Happy wading!

                                        Marcel Schoppers
                                        { ihnp4 | pur-ee } ! uiucdcs ! marcel

------------------------------

Date: 23-Apr-84 21:36 PST
From: Kirk Kelley  <KIRK.TYM@OFFICE-2.ARPA>
Subject: Re: Use of "and"

Hmmm, perhaps a friendly retrieval expert should accept statements from the user
like "Don't tell ME how to think!", deduce that there is some ambiguity of
interpretation over the meaning of a request, and ask for explicit
disambiguation of the troublesome operators after each future request, until the
user decides to pick and live with a single unambiguous interpretation.

 -- kirk

------------------------------

Date: Sun 22 Apr 84 23:24:45-PST
From: Janet Coursey <JVC@SU-SCORE.ARPA>
Subject: "and"

William Gass is expansive but probably incomplete in examining functional
uses of "and" in written literature.  He finds these uses and meanings:  a
conditional, a conjunction, adverbial, to balance or coordinate, finally, in
particular or above all, joint dependency of truth value, in addition,
following in time, following in space, next, equally true, increased emphasis,
sum or total, equivalence of interpretation or "that is to say", to condense,
to skip, suddenness in time, suddenness in space, consequence and cause...
More uses and their wonderful subtleties are presented in the article;
they are more varied than the AIList discussion has yet revealed.
The author is the David May Distinguished University Professor in the
Humanities at Washington University.

Gass, William H.  "And."  Harper's.  February, 1984.

------------------------------

Date: 19 Apr 84 19:01:54-PST (Thu)
From: hplabs!tektronix!ogcvax!metheus!howard @ Ucb-Vax
Subject: Re: Use of "and" - (nf)
Article-I.D.: metheus.237

The "Indiana & Ohio" problem can be explained by a feature of human language
processing which goes on all the time, although we are not often consciously
aware of it.  I refer, of course, to the rejection of contradictory, unlikely,
or impossible interpretations.

The reason we interpret "all customers in Indiana and Ohio" to mean "all
customers in Indiana and *all customers in* Ohio" is that the seemingly
logical interpretation is contradictory and cannot possibly refer to any
customers (regardless of what is in the database).  It is interesting to
note in this connection that some oriental forms of logic require that a
pair of examples be given for each set of things to be described, one of a
thing in the set, the other of a thing out of the set.  This prevents
wasting time with arguments based on the null set, like "All purple cows
made out of neutrinos can fly; all animals that can fly have wings; therefore
all purple cows made out of neutrinos have wings".  An example syllogism:
"Where there is smoke, there is fire.  Here, there is smoke: like in a kitchen,
unlike in a lake.  Therefore, here there is fire."

This rejection is extremely sophisticated, and includes, for example, infinite
loop detection.  An example: how many people would take the obvious "logical"
interpretation of the instructions "Lather. Rinse. Repeat." to be the correct
one?  We all automatically read this as "Lather. Rinse. Repeat the previous
two instructions once." because the other reading doesn't make physical sense.
How many people ever had to THINK about that, consciously, at all?

Also, it is customary to be able to delete redundant or implied
information from a sentence.  Since the three words between stars above are
somewhat redundant, and can be deleted without affecting the only reasonable
interpretation of the phrase, it should be O.K. to delete them.

Just more fat on the fire (my, how it sizzles!) from:

        Howard A. Landman
        ogcvax!metheus!howard

------------------------------

Date: Mon, 23 Apr 84 10:40:05 cst
From: Peter Chen <chen%lsu.csnet@csnet-relay.arpa>
Subject: Announcing a new journal --  DATA & KNOWLEDGE ENGINEERING


TITLE OF THE JOURNAL:

    DATA & KNOWLEDGE ENGINEERING

PUBLISHER:

    North-Holland

OBJECTIVES AND COVERAGE:

    Although database systems and knowledge systems have their differences,
they share many common principles.  For example, both are interested in the
representations of real-world phenomena.  Therefore, it is beneficial to
have a common forum for database and knowledgebase systems.

    This new journal will bring together the new advances in database and
knowledgebase areas to the attention of researchers, designers, managers,
administrators, and users.  It will focus on new techniques, tools,
principles, and theories of constructing successful databases or
knowledgebases.  The journal will cover (but not be limited to) the
following topics:

    Representation of Data or Knowledge
    Architecture of Database or Knowledgebase Systems
    Construction of Data/Knowledge Bases
    Applications of Data/Knowledge Bases
    Case Studies and Management Issues

    Besides these technical topics, the journal will also have columns on
conference reports, calendars of events, book reviews, etc.


CALL FOR PAPERS:

    Original papers in the field of data & knowledge engineering are
welcome.  In the cover letter, the author is required to declare
the originality of the manuscript (i.e.,
no similar versions of the manuscript have been published
or have been submitted elsewhere) and to agree
to the transfer of the copyright
to the publisher once the paper is accepted.

Please submit 5 copies of your manuscript to one of
the Associate Editors in the speciality field or to the regional editor.
Or, if you prefer, mail directly to the Editor-in-Chief.

The following are the addresses of the editors:

(1) Editor-in-Chief:
    Prof. Peter Chen
    Dept. of Computer Science
    Louisiana State University
    Baton Rouge, LA 70803-4020
    (chen%lsu.csnet@csnet-relay.arpa)
    (CSNET: chen@lsu)
    Tel: (504) 388-2482

(2) Associate Editors:

  (a) Data Engineering:

      Prof. Wesley Chu
      Dept. of Computer Science
      U.C.L.A.
      Los Angeles, CA 90024

      Prof. Jane Liu
      Dept. of Computer Science
      University of Illinois
      1304 West Springfield Rd.
      Urbana-Champaign, IL 61801

  (b) Knowledge Engineering:

      Dr. Donald Walker
      Natural-Language and Knowledge-Resource Systems
      SRI International
      Menlo Park, CA 94025
      (During Dr. Walker's transition from SRI International
       to Bell Communications Research, manuscripts should be
       sent to the Editor-in-Chief during the period
       4/15/84 to 10/15/84.)

(3) Regional Editor for Europe:
    Prof. Reind van de Riet
    Dept. of Math. and Computer Science
    Free University
    1081 HU Amsterdam
    The Netherlands


PUBLICATION DATE:

    The journal will be published quarterly, and
    the first issue is planned for the last quarter of 1984.

FOR FURTHER INFORMATION, INSTITUTIONAL SUBSCRIPTION, OR A FREE SAMPLE COPY:

    Please contact the publisher:
    (1) In the USA/Canada:
        Elsevier Science Publishing Co., Inc.
        P.O. Box 1663
        Grand Central Station
        N.Y., N.Y. 10163
    (2) In all other countries:
        North-Holland
        P.O. box 1991
        1000 BZ Amsterdam
        The Netherlands

FOR SPECIAL PERSONAL SUBSCRIPTION RATE:
     Please contact the Editor-in-Chief.

FOR SERVING AS A REFEREE:
     Please send a short note to the Editor-in-Chief or to
     one of the editors and indicate your specialities.

  --Peter Chen (CSNET mailbox: chen@lsu)
              (chen%lsu.csnet@csnet-relay.arpa)

------------------------------

Date: 23 Apr 1984  12:32 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Nondeterminism and Creative Analogies

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                     The Copycat Project:
        An Experiment in Nondeterminism and Creative Analogies

                        Doug Hofstadter

                         AI Revolving Seminar
           Wednesday    4/25    4:00pm  8th floor playroom

A micro-world is described, in which many analogies involving strikingly
different concepts and levels of subtlety can be made.  The question
"What differentiates the good ones from the bad ones?" is discussed,
and then the problem of how to implement a computational model of the
human ability to come up with such analogies (and to have a sense for
their quality) is considered.  A key part of the proposed system, now
under development, is its dependence on statistically emergent properties
of stochastically interacting "codelets" (small pieces of ready-to-run
code created by the system, and selected at random to run with probability
proportional to heuristically assigned "urgencies").  Another key element
is a network of linked concepts of varying levels of "semanticity", in
which activation spreads and indirectly controls the urgencies of new
codelets.  There is pressure in the system toward maximizing the degree
of "semanticity" or "intensionality" of descriptions of structures, but
many such pressures, often conflicting, must interact with one another,
and compromises must be made.  The shifting of (1) perceived boundaries
inside structures, (2) descriptive concepts chosen to apply to structures,
and (3) features perceived as "salient" or not, is called "slippage".
What can slip, and how, are emergent consequences of the interaction
of (1) the temporary ("cytoplasmic") structures involved in the analogy
with (2) the permanent ("Platonic") concepts and links in the conceptual
proximity network, or "slippability network".  The architecture of this
system is postulated as a general architecture suitable for dealing not
only with fluid analogies, but also with other types of abstract perception
and categorization tasks, such as musical perception, scientific theorizing,
Bongard problems and others.
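The codelet-selection step described above amounts to sampling with probability proportional to urgency (roulette-wheel selection). A minimal sketch in Python; the codelet names and urgency values are invented, not taken from the Copycat system:

```python
# Draw a runnable task ("codelet") at random with probability
# proportional to its heuristically assigned urgency.
import random

codelets = {
    "scan-for-bond": 40,       # urgencies here are illustrative only
    "build-group": 25,
    "describe-structure": 10,
}

def pick_codelet(pool, rng=random):
    # Roulette-wheel selection: P(codelet) = urgency / total urgency.
    total = sum(pool.values())
    r = rng.uniform(0, total)
    for name, urgency in pool.items():
        r -= urgency
        if r <= 0:
            return name
    return name  # guard against floating-point rounding at the far end

random.seed(0)
counts = {name: 0 for name in codelets}
for _ in range(7500):
    counts[pick_codelet(codelets)] += 1
print(counts)  # frequencies roughly proportional to 40 : 25 : 10
```

In the full architecture the urgencies themselves would shift as activation spreads through the concept network, so the sampling distribution is constantly being re-weighted.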

------------------------------

End of AIList Digest
********************

∂28-Apr-84  1704	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #53
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Apr 84  17:03:25 PST
Date: Sat 28 Apr 1984 15:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #53
To: AIList@SRI-AI


AIList Digest            Sunday, 29 Apr 1984       Volume 2 : Issue 53

Today's Topics:
  References - AI and Legal Systems,
  Linguistics - "Unless" & "And" & Metaphors,
  Jobs - Noncompetition Clauses,
  Seminars - System Identification & Chunking and R1-SOAR
----------------------------------------------------------------------

Date: Wed, 25 Apr 84 19:34 MST
From: Kip Cole <KCole@HIS-PHOENIX-MULTICS.ARPA>
Subject: Pointers to AI and Legal Systems

Some time ago there was a request for pointers to references on Legal
Information Systems and AI.  I have the following which I can recommend:

1.  Deontic Logic, Computational Linguistics & Legal Info.  Systems.
Martino ed., published by North Holland.

2.  AI and Legal Information Systems.  Ciampi ed., published by North
Holland.

Both books collect papers presented at a conference in Italy on said topics.

Kip Cole, Honeywell Australia.

------------------------------

Date: Wed 25 Apr 84 16:51:30-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Meanings of "Unless"

Multiple meanings for English connectives:

I once read a paper in which it was seriously alleged that the word "unless"
has in excess of 4,000 (that's four thousand) potential logically distinct
meanings when used in writing an English law.   Sorry, I don't have the
reference, nor can I remember very many of the meanings.
                                                                - Richard

------------------------------

Date: 20 Apr 84 12:52:40-PST (Fri)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Customers in Ohio and Indiana...
Article-I.D.: eosp1.808

One point of view seems to have been neglected in this discussion.
Suppose we build programs smart enough to "realize" that 'Ohio and Indiana'
really means 'Ohio <inclusive or> Indiana'.  Then what happens to the poor
user who really means 'Ohio AND Indiana'??  Suppose the original poor user
in this story had been trying to weed out duplicate accounts?

It seems to me that the best you can do is:
        (a) Make a semantic decision based upon a much larger context of
        what the user is doing, or:
        (b) Catch the ambiguity and ask the user to clarify.  We can
        deduce from the original story that many users will become furious if
        asked to clarify, by the way.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 19 Apr 84 6:53:00-PST (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: uicsl.15500032

Here are some sources for metaphor:

1. A book edited by Andrew Ortony. The title is
Metaphor and Thought.  There are several good articles in this book, and I
recommend it as a good place to start, but not as the last word.

2. The Psychology Dept. at Adelphi University has been sporadically
putting out a mimeo entitled: The Metaphor Research Newsletter.  The latest
edition (which arrived today) indicates that as of January 1985 it will
become a full-fledged journal called Metaphor, published by Erlbaum.

3. Dedre Gentner (of BBN) has been doing assorted work on metaphor.

4. Metaphors We Live By, by George Lakoff and Mark Johnson, is fun to read.
They have some good ideas, but they do tend to make too big a deal out of
them.  I think it's worth reading.


As far as your claim that "man is a BIC pen" is a "bad" metaphor, I
tend to shy away from such a coarse-grained term.  For me, metaphors
may be more and less apt, more and less informative, more and less
novel, more and less easily understood, etc.  In this particular
example human beings are so complex that there is almost no object that
they cannot be compared to -- however strained the interpretation may
be.  BIC pens are well known (thanks to simple construction and a good
advertising agency) for their reliability and being able to withstand
unreasonable punishment (like being strapped to the heel of an Olympic
figure skater).  Similarly, humankind throughout the ages has
successfully held up under all kinds of evolutionary torture, yet we
continue (as a species) to function.  Now this interpretation may seem
a little bizarre to you, but to me it seemed to come almost
instantaneously and quite naturally.  Can you truly say it is "bad?"

Even an example as silly sounding (at first) as "telephones are like
grapefruits" yields to the great creative power of the human mind.  Despite
their simple outer appearance, they both conceal a more complex inner
structure (which, as a youngster, I delighted to dissect).  Both are
"machines" for reproducing something -- the telephone reproduces sounds,
while the grapefruit reproduces grapefruits (this one admittedly took a few
seconds more to think of).  So what's a "bad" metaphor?


I would love to continue this discussion with interested parties privately,
so as not to take up space in the notesfile.  USENET mail can reach me at
...!uiucdcs!uicsl!dinitz

-Rick Dinitz

------------------------------

Date: 21 Apr 84 12:37:35-PST (Sat)
From: hplabs!hao!cires!nbires!opus!rcd @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: opus.386

A question on non-competition clauses of the sort in which you agree, if
you leave a particular job, not to work in that field (or geographical
area), etc.: I once heard that they were essentially not enforceable, the
given reason being that there are certain legal rights you can't give up
(called "unconscionable clauses" in a contract), and that somehow "giving
up the right to make a living" (in your profession) fell into this class.
I don't know if this is actually true - I'd like to hear a qualified
opinion from someone who understands the law or who has been through a case
of this sort.

In any event, I think it's pretty shoddy for an employer to make such
requests of an employee - this is going a long way beyond assigning patent
rights while you're employed and not disclosing company secrets.  If the
employer mistrusts you that much, can you trust him?  I also think it's
foolish to agree in writing to something that you don't accept, on the
basis that you don't think they'll use it or that it isn't enforceable.
Don't bet against yourself!

...Are you making this up as you go along?              Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd                         (303) 444-5710 x3086

------------------------------

Date: 22 Apr 84 3:09:20-PST (Sun)
From: decvax!cca!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: inmet.1313

A few years back, I made the mistake of working for Computervision.  They
tried to force me to sign an agreement that I a) would not work for any
competitors for 18 months, and b) would not "entice" other employees into
leaving for an equal length of time.  They didn't say a word about continuing
my salary for that time, either!

The incompetents in Personnel (I'd call them "morons", but true morons are
considerably more conscientious workers) didn't notice that I never signed
or returned the above agreement, though!

Andrew W. Rogers, Intermetrics   ...{harpo|ihnp4|ima|esquire}!inmet!andrew

------------------------------

Date: 23 Apr 84 7:49:14-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!werner @ Ucb-Vax
Subject: Re: Non-competition clauses. The devils advocate speaks
Article-I.D.: ut-ngp.531

My personal opinion aside, I do have sympathy with the company that
reveals their "secrets" to an employee, only to have him turn into
competition without having to pay the research costs.  Remember also,
that for every one who leaves, there are five guys who stay, and more
likely than not the results of their years of work get 'abducted' also,
as, in a decent research effort, the work is done in a team rather than
by solo-artists.

So, after your flamers burn out, let's hear some ideas which take care
of the interests of all parties, because remember, one day it may be
YOU who stays behind, or YOU who may be the founder of a 3-man think-tank.

        werner @ ut-ngp         "every coin has, AT LEAST, 3 sides"

------------------------------

Date: 23 Apr 84 9:35:44-PST (Mon)
From: harpo!ulysses!gamma!epsilon!mb2c!mpr @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: mb2c.242

Non-competition clauses may or may not be enforceable. It depends on
the skill of the party involved or any special knowledge that person
might have.  For example, it is not a shoddy practice or expectation
for Coca-Cola to expect its personnel not to work for Pepsi-Cola,
especially (and only) if they have knowledge of the secret formula.

------------------------------

Date: 24 Apr 84 12:27:53-PST (Tue)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxa!wetcw @ Ucb-Vax
Subject: Re: Non-competition clauses (The Doctor)
Article-I.D.: pyuxa.710

In reference to the Doctor in Boulder.

The Doctor had joined a practicing group in a clinic.  He did
indeed sign a contract which contained a clause which said that
if he left the clinic, he would be unable to practice in the
county in which Boulder is located.

It seems that after several years, the administrator of the
clinic (not a Doctor) decided that Doctor X was not bringing
in enough cash to the group.  The Doctor was warned that he
would have to increase his patient load to bring his revenues
up to what they thought it should be.  The Doctor refused to
compromise his patients' care by giving them less time.  After
a standoff, the administrator and the other Doctors told Doctor
X he would have to leave the clinic.

The crux of the problem was that he did not leave on his own, but
was asked to leave, and therefore believed that the non-comp
clause was invalid.  He opened an office in Boulder.  Many of his
former patients followed him, much to the displeasure of the
clinic crowd.  The clinic then decided to go to court.  They
won in court so that Doctor X had to move his practice out of the
county.  The patients still followed him.

I think that this case is working its way up to the Supreme
Court.  The whole affair was aired last year on [60 Minutes].  The
clinic crew and their administrative lackey came off in a
very bad light.  They were arrogant and seemed self-serving
to the nth degree.  I hope Doc X wins in the final analysis.
In the meantime, there was a time-limit clause in the contract
which lapses sometime soon.
T. C. Wheeler

------------------------------

Date: 24 Apr 84 20:51:48 PST (Tuesday)
From: Bruce Hamilton <Hamilton.ES@XEROX.ARPA>
Reply-to: Hamilton.ES@XEROX.ARPA
Subject: Seminar - Learning About Systems That Contain State Variables

The research described below sounds closer to what I had in mind when I
raised this issue a couple of weeks ago than the automata-theoretic
responses I tended to get.  --Bruce

[For more leads on learning "systems containing state variables", readers
should look into that branch of control theory known as system identification.
Be prepared to deal with some hairy mathematical notation.  -- KIL]


  Date: 24 Apr 84 11:39 PST
  From: mittal.pa
  Subject: Reminder: CSDG Today

  The CSDG today will be given by Tom Dietterich, Stanford University,
  based on his thesis research work.
  Time etc: Tuesday, Apr. 24, 4pm, Twin Conf. Rm (1500)

Learning About Systems That Contain State Variables

It is difficult to learn about systems that contain state variables when
those variables are not directly observable.  This talk will present an
analysis of this learning problem and describe a method, called the
ITERATIVE EXTENSION METHOD, for solving it.  In the iterative extension
method, the learner gradually constructs a partial theory of the
state-containing system.  At each stage, the learner applies this
partial theory to interpret the I/O behavior of the system and obtain
additional constraints on the structure and values of its state
variables.  These constraints trigger heuristics that hypothesize
additional internal state variables.   The improved theory can then be
applied to interpret more complex I/O behavior.  This process continues
until a theory of the entire system is obtained.  Several conditions
sufficient to guarantee the success of the method will be presented.
The method is being implemented and applied to the problem of learning
UNIX file system commands by observing a tutorial interaction with UNIX.
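[The loop described in the abstract can be sketched in a few lines.  The
following is purely my own toy illustration of the idea, not Dietterich's
implementation: the `interpret` and `hypothesize_state_vars` functions, the
toggle-switch traces, and all names here are invented for the example. -- KIL]

```python
def interpret(theory, trace):
    """Apply the partial theory to an I/O trace; return constraints,
    here: inputs whose outputs the current theory cannot explain
    without positing hidden state (same input, different output)."""
    seen, constraints = {}, []
    for inp, out in trace:
        if inp in seen and seen[inp] != out:
            constraints.append(inp)
        seen[inp] = out
    return constraints

def hypothesize_state_vars(constraints):
    """Heuristic: posit one hidden state variable per ambiguous input."""
    return {f"state_for_{inp}" for inp in constraints}

def iterative_extension(traces):
    """Gradually extend a partial theory, simpler I/O behavior first."""
    theory = {"state_vars": set()}
    for trace in sorted(traces, key=len):
        theory["state_vars"] |= hypothesize_state_vars(
            interpret(theory, trace))
    return theory

# A toggle switch: pressing it flips a hidden light state.
traces = [
    [("press", "on")],
    [("press", "on"), ("press", "off"), ("press", "on")],
]
theory = iterative_extension(traces)
# theory["state_vars"] == {"state_for_press"}
```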

------------------------------

Date: 19 Apr 1984 1326-EST
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Chunking and R1-SOAR

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]


           "RECENT PROGRESS IN SOAR: CHUNKING AND R1-SOAR"
                   by John Laird & Paul Rosenbloom

          AI Seminar,  Tuesday April 24,  4.00pm, Room 5409

In this talk we present recent progress in the development of the @p[Soar]
problem-solving architecture as a general cognitive architecture.  This work
consists of first steps toward: (1) an architecture that can learn about all
aspects of its own behavior (by extending chunking to be a general learning
mechanism for @p[Soar]); and (2) demonstrating that @p[Soar] is (more than)
adequate as a basis for knowledge-intensive (expert systems) programs.

Until now chunking has been a mechanism that could speed up simple
psychological tasks, providing a model of how people improve their
performance via practice.  By combining chunking with @p[Soar], we show how
chunking can do the same for AI tasks such as the Eight Puzzle, Tic-Tac-Toe,
and a portion of an expert system.  More importantly, we present partial
demonstrations: (1) that chunking can lead to more complex forms of
learning, such as the transfer of learned behavior (that is, the learning of
generalized information), and strategy acquisition; and (2) that it is
possible to build a general problem solver that can learn about all aspects
of its own behavior.

Knowledge-intensive programs are built in @p[Soar] by representing basic task
knowledge as problem spaces, with expertise showing up as rules that guide
complex problem-space searches and substitute for expensive problem-space
operators.  Implementing a knowledge-intensive system within @p[Soar] begins
to show how: (1) a general problem-solving architecture can work at the
knowledge intensive (expert system) end of the problem solving spectrum; (2)
it can integrate basic reasoning and expertise, using both search and
knowledge when relevant; and (3) it can perform knowledge acquisition by
transforming computationally intensive problem solving into efficient
expertise-level rules (via chunking).  This approach is demonstrated on a
portion of the expert system @p[R1], which configures computers.

------------------------------

End of AIList Digest
********************

∂03-May-84  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #54
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 May 84  11:02:55 PDT
Date: Thu  3 May 1984 10:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #54
To: AIList@SRI-AI


AIList Digest            Thursday, 3 May 1984      Volume 2 : Issue 54

Today's Topics:
  Literature Search - Applications of Expert Systems Proceedings,
  AI News - The End of British AI???,
  Linguistics - Metaphor and Riddles,
  AI Programming - Discussion,
  AI Jobs - Noncompetition Clauses,
  Seminars - Multiple Inheritance & Perceptual Organization
----------------------------------------------------------------------

Date: 27 Apr 84 9:46:54-PST (Fri)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: Looking for Applications of Expert Sys. Proceedings
Article-I.D.: vaxine.250

In Bruce Buchanan's Partial Bibliography on Expert Systems (Nov. 82)
he cited the Proceedings for the Colloquium on Application of Knowledge
Based (or Expert) Systems, London, 1982.  Does anybody out in netland
know who sponsored this colloquium or, more importantly, how I can get
a hold of these proceedings?

Thanks in advance,

                        Charlie Berg
                        Expert Systems
                        Automatix, Inc.
                     ...{allegra, linus}!vaxine!chb

------------------------------

Date: Mon 30 Apr 84 14:04:33-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: The End of British AI???

The ``New Scientist'' of April 12 quotes David Thomas, director for
Information Technology at the Science and Engineering Research Council
(British equivalent of NSF) and director of the Intelligent
Knowledge-Based Systems (British codeterm for AI) programme of the
Department of Trade and Industry:

        ``If computer scientists want to do research they must do it
        in partnership with industry... WE DON'T WANT COMPUTER
        SCIENTISTS working alone with no common aim in sight, and
        PUBLISHING THEIR WORK IN AN ACADEMIC JOURNAL for the Japanese
        to  pick up on ... It is difficult to think of anything in
        computer science which would not be useful to industry.''

(emphasis mine).

Yours, at a loss for printable comments,

-- Fernando Pereira

------------------------------

Date: Mon, 30 Apr 1984 14:14:30 EDT
From: Another Memo from the Etherial Plane
      <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Metaphor & Riddles

     The new issue of the Journal of American Folklore contains an article
on the riddling process and its relation to metaphor interpretation, written
by Green & Peppicello.  The article also contains an excellent bibliography.

------------------------------

Date: 26 Apr 84 6:06:00-PST (Thu)
From: harpo!ulysses!gamma!pyuxww!pyuxss!aaw @ Ucb-Vax
Subject: Re: RE: AI Programming
Article-I.D.: pyuxss.319

I strongly agree that AI programming tends to be on several levels,
but rather than seeing AI programs as a controller or generator plus
a pragmatic level, I think many AI programs have three levels:
  1. organizer, based on feedback from heuristic controller(2)

  2. controller, based on results of algorithmic or applicative level(3)

  3. worker, playing with real data

        The raison d'etre might be that most programs <5k statements are
pure applications; programs getting much larger tend to need a single
intelligent controller, while programs in the 20k-100k statement range
(the AI programming thesis level) are in the three-level range.  All AI
programs bigger than that tend to be algorithmic refinements of previous
work, with refiners in terror of changing the basic structure.
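[The three levels above might be sketched as follows.  This is an invented
illustration only: the Worker/Controller/Organizer classes and the
max-finding "task" are hypothetical stand-ins, not from any particular AI
program. -- KIL]

```python
class Worker:
    """Level 3: plays with real data (the algorithmic/applicative level)."""
    def solve(self, numbers):
        return max(numbers)

class Controller:
    """Level 2: heuristic control over which subproblem the worker
    attacks next (stand-in heuristic: largest subproblem first)."""
    def __init__(self, worker):
        self.worker = worker
    def run(self, subproblems):
        results = {}
        for name, data in sorted(subproblems.items(),
                                 key=lambda kv: -len(kv[1])):
            results[name] = self.worker.solve(data)
        return results

class Organizer:
    """Level 1: organizes the overall problem using the controller's
    results as feedback, then merges the answers."""
    def __init__(self, controller):
        self.controller = controller
    def run(self, problem):
        subproblems = {f"part{i}": chunk
                       for i, chunk in enumerate(problem)}
        results = self.controller.run(subproblems)
        return max(results.values())

organizer = Organizer(Controller(Worker()))
answer = organizer.run([[3, 1], [9, 2, 5], [4]])   # 9
```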
                        {harpo,houxm,ihnp4}!pyuxss!aaw
                        Aaron Werman

------------------------------

Date: 25 Apr 84 7:19:04-PST (Wed)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Non-competition clauses - (nf)
Article-I.D.: eosp1.812

I'm amazed at the naivete of people suggesting that an employer has
no good reason to ask people to sign non-competition clauses.  Most
employers allow many, if not most of their employees to have access to
sensitive and trade secret information.  Employees leave a company with
their heads full of such data, and they become a walking time bomb to
their previous employer, should this info fall into the hands of a
competitor.

History shows that many ex-employees are unscrupulous in this regard.  IBM
has sued successfully in cases where ex-employees have formed, or joined,
other companies to build hardware that is very similar to hardware the
employees were building at IBM.  In many of these cases IBM has won,
presumably demonstrating that the employees were using more than their
own skills to imitate IBM's projects.

By the way, the classic example of this type of problem is a list of
customers.  A company's customer list is in many cases a critical secret,
and companies often sue to prevent an ex-employee from taking the list to his
next company, or using it himself.

Perhaps many of the writers on this subject are from academic environments
and have not worked in technologically competitive companies.
Why don't you try the other end of this problem -- imagine yourself working
for such a company, for which you don't sign a non-competition agreement.
Then agree also that you will not have access to the company's sensitive and
trade-secret data, so that the company will genuinely not need you to sign
such an agreement.  Then just try to get your work done without access to
important meetings and specifications.

Non-competition agreements often specify very long periods of time, or no
specific time frame at all.  I believe that time periods over two years
are generally unenforceable.

By the way, when you join a company, you usually make personal data
available to it, which the company undertakes to keep secret,
and not to use after you have left the company.  This is a 2-way
street.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 29 Apr 1984  21:02 EDT (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Multiple Inheritance

               [Forwarded from the MIT bboard by SASW@MIT-MC.]


                   Multiple Inheritance: What, Why, and How?

                                  Dan Carnese


                             AI Revolving Seminar
                 Wednesday, May 2, 4:00pm, 8th Floor Playroom

This talk is concerned with type definition by ``multiple inheritance''.
Informally, multiple inheritance is a technique for defining new types by
combining the operation sets of a number of old ones.

The literature concerning multiple inheritance has been heavily biased toward
the description of the constructs involved in particular systems.  But no
satisfying account has been given of:
   - the rationale for using definition by multiple inheritance over
     simpler approaches to type definition,
   - the essential similarities of the various proposals, or
   - the key design decisions involved in these systems and the
     significance of choosing specific alternatives.

The goal of this talk is to dissipate some of the ``general prevailing
mysticism'' surrounding multiple inheritance.  The fundamental contribution
will be a simple framework for describing the design and implementation of
single-inheritance and multiple-inheritance type systems.  This framework will
be used to describe the inheritance mechanisms of a number of contemporary
languages.  These include:
   - the Lisp Machine's flavor system
   - the classes of Smalltalk-80, ``Smalltalk-82'' (Borning and Ingalls),
     and Loops (Bobrow and Stefik)
   - the ``traits'' extension to Mesa (Curry et al.)

Given the description of the ``what'' and ``how'' of these systems, we will
then turn to the question of ``why.''  Some principles for evaluating
inheritance mechanisms will be presented and applied to the above five designs.
A few simple improvements to the Lisp Machine flavor system will be identified
and motivated by the evaluation criteria.

We will conclude by discussing the relationship between multiple inheritance in
programming and multiple inheritance in knowledge representation, and the
lessons from the former which can be applied to the latter.

------------------------------

Date: 30 Apr 1984  09:23 EDT (Mon)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Perceptual Organization and Visual Recognition

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


      The Use of Perceptual Organization for Visual Recognition

                              DAVID LOWE

                        May 7, 1984    4:00PM
                      NE43 - 8th floor Playroom


     The human visual system has the capability to spontaneously
derive groupings and structures from an image without higher-level
knowledge of its contents.  This capacity for perceptual organization
is currently missing from most computer vision systems.  It will be
shown that perceptual groupings can play at least three important
roles in visual recognition: 1) image segmentation, 2) direct
inference of three-space relations, and 3) indexing world knowledge
for subsequent matching.  These functions are based upon the
expectation that groupings reflect actual structure of the scene
rather than accidental alignment of image elements.  A number of
principles of perceptual organization will be derived from this
criterion of non-accidentalness and from the need to limit
computational complexity.  The use of perceptual groupings will be
demonstrated for segmenting image curves and for the direct inference
of three-space properties from the image.

     Much computer vision research has been based on the assumption
that recognition will proceed bottom-up from the image to an
intermediate 2-1/2D sketch or intrinsic image representation, and
subsequently to model-based recognition.  While perceptual groupings
can contribute to this intermediate representation, they can also
provide an alternate pathway to recognition for those cases in which
there is insufficient information for deriving the 2-1/2D sketch.
Methods will be presented for using perceptual groupings to index
world knowledge and for subsequently matching three-dimensional models
directly to the image for verification.  Examples will be given in
which this alternative pathway seems to be the only possible route to
recognition.  A functioning real-time vision system will be described
that is based upon the direct search for the projections of 3D models
in an image.

Refreshments:  3:45PM
Host:  Professor Patrick H. Winston

------------------------------

End of AIList Digest
********************

∂04-May-84  2111	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #55
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 May 84  21:11:25 PDT
Date: Fri  4 May 1984 19:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #55
To: AIList@SRI-AI


AIList Digest            Saturday, 5 May 1984      Volume 2 : Issue 55

Today's Topics:
  AI Support - The End of British AI?,
  Expert Systems - English Conference Reference,
  AI Jobs - Noncompetition Clauses,
  Review - HEURISTICS by Judea Pearl,
  Humor - Computers and Incomprehensibility,
  Consciousness - Reply to Phaedrus (long)
----------------------------------------------------------------------

Date: Thu 3 May 84 11:30:40-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: The End of British AI?

The End of British AI?

I think Mr. Pereira is being more than a little paranoid here (and he need not
imagine that AI research is the only type for which industry sometimes shows
little enthusiasm).   That pronouncement sounds as if it was politically
motivated, therefore not to be taken too literally anyway, and will be
forgotten as soon as convenient.   Not that I think my government's policy on
computer science research is sound -- quite the reverse -- but I don't think it
has suddenly become a lot worse.
                                                - Richard

------------------------------

Date: 30 Apr 84 8:07:16-PDT (Mon)
From: decvax!decwrl!rhea!bartok!shubin @ Ucb-Vax
Subject: Info on Expert Systems conference in England
Article-I.D.: decwrl.7512

|  In Bruce Buchanan's Partial Bibliography on Expert Systems (Nov. 82)
|  he cited the Proceedings for the Colloquium on Application of Knowledge
|  Based (or Expert) Systems, London, 1982.  Does anybody out in netland
|  know who sponsored this colloquium or, more importantly, how I can get
|  a hold of these proceedings?
|                  Charlie Berg
|                  Expert Systems
|                  Automatix, Inc.
|               ...{allegra, linus}!vaxine!chb

We gave a paper at a conference called "Theory and Practice of Knowledge
Based Systems", which was held 14-16 Sep 82 at Brunel University, which is
*near* London.  The chair of the conference was Dr. Tom Addis, also of
Brunel University.  The conference was sponsored (or approved or whatever)
by ACM, IEEE and SPL International.

I found two addresses.  The first is where the conference was, and (I
believe) the second is where the Computer Science department is:
        Brunel University
        Shoreditch Campus
        Coopers Hill, Englefield Green
        Egham, Surrey
        ENGLAND

or      Brunel University
        Department of Computer Science
        Uxbridge, Middlesex
        ENGLAND

hal shubin
        UUCP:    ...!decwrl!rhea!bartok!shubin
        ARPAnet: hshubin@DEC-MARLBORO

------------------------------

Date: Fri, 4 May 84 10:51 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Non-competition clauses

You have constructed a very good argument for nondisclosure agreements.
The issue, however, was non-competition clauses, for which your only
justification seems to be that "[h]istory shows that many ex-employees
are unscrupulous. . .".  I find this less than compelling.

The successful legal actions you cite demonstrate that recourse is
available to the company damaged by such actions by ex-employees.  The
risk that *full* compensation for such damage may not be forthcoming is
a risk of doing business, and must be managed as such.

By the way, nondisclosure of personal data by the company is much more
closely analogous to nondisclosure of proprietary information by the
employee than it is to noncompetition by the employee.  (Do you think I
could talk Xerox into agreeing not to employ anyone in my present
capacity for two years if I should leave?)

Mark

------------------------------

Date: Fri, 4 May 84 15:32:32 PDT
From: Anna Gibbons <anna@UCLA-CS.ARPA>
Subject: HEURISTICS/Dr. Judea Pearl

FROM: Judea Pearl@UCLA-SECURITY.
Those who have inquired about my new book "HEURISTICS" may be
interested to know that it is finally out, and can be obtained from
Addison-Wesley Publishing Company, Reading, Mass. 01867, Tel.
(617) 944-8660.  The title is "Heuristics: Intelligent Search
Strategies for Computer Problem Solving", the ISBN is
0-201-05594-5, and the price is $38.95.  For those unfamiliar with the
book's content, the following are excerpts from the cover description.

This book presents, characterizes, and analyzes problem solving
strategies that are guided by heuristic information.  It provides a
bridge between heuristic methods developed in artificial intelligence,
optimization techniques used in operations research, and
complexity-analysis tools developed by computer theorists and
mathematicians.

The book is intended to serve both as a textbook for classes in AI
Control Strategies and as a reference for the professional/researcher
who seeks an in-depth understanding of the power of heuristics and
their impact on various performance characteristics.

In addition to a tutorial introduction of standard heuristic search
methods and their properties, the book presents a large collection of
new results which have not appeared in book form before.  These include:

*  Algorithmic taxonomy of basic search strategies, such as
backtracking, best-first, and hill-climbing, their variations and
hybrid combinations.

*  Searching with distributions and with nonadditive evaluation
functions.

*  The origin of heuristic information and the prospects for automatic
discovery of heuristics.

*  Applications of branching processes to the analysis of path-seeking
algorithms.

*  The effect of errors on the complexity of heuristic search.

*  The duality between games and mazes.

*  Recreational aspects of recursive minimaxing.

*  Average performance analysis of game-playing strategies.

*  The benefits and pitfalls of look-ahead.

Each chapter contains annotated references to the literature and a
set of nontrivial exercises chosen to enhance skill, insight, and
curiosity.
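[As a concrete reminder of what one of the basic strategies in the taxonomy
above looks like, here is a minimal greedy best-first search.  This sketch is
my own illustration, not code from the book; the integer-line toy problem and
all names are invented. -- KIL]

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Greedy best-first search: always expand the frontier node with
    the smallest heuristic estimate h(node)."""
    frontier = [(h(start), start)]
    visited = {start}
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []                      # reconstruct path via parents
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in successors(node):
            if nxt not in visited:
                visited.add(nxt)
                parent[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy example: walk the integer line from 0 to 7,
# guided by distance-to-goal as the heuristic.
path = best_first_search(0, 7,
                         lambda n: [n - 1, n + 1],
                         lambda n: abs(7 - n))
# path == [0, 1, 2, 3, 4, 5, 6, 7]
```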

Enjoy your reading and, please, let me know if you have suggestions
for improving the form or content.  Judea Pearl @ UCLA-SECURITY.

------------------------------

Date: 3 May 1984 20:50:55-EDT
From: walter at mit-htvax
Subject: Seminar - Computers and Incomprehensibility

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


               GRADUAL STUDENT LUNCH SEMINAR SERIES

                         The G0001 Project:
             An Experiment in G0002 and Creative G0003


A G0004 is described, in which many G0003 involving strikingly
different G0005 and levels of G0006 can be made.  The question "What
differentiates the good G0003 from the bad G0003?" is discussed, and
the problem of how to G0008 a G0009 G0010 of the G0011 G0012 to come
up with such G0003 (and to have a sense for their quality) is
considered.  A key part of the proposed system, now under
development, is its dependence on G0013 G0014 G0015 of G0016
interacting "G0017" (selected at random to G0019 with G0020
proportional to G0021 assigned "G0022").  Another key G0023 is a
G0024 of linked G0005 of varying levels of "G0025", in which G0026
spreads and G0027 controls the G0028 of new G0017. The shifting of
(1) G0033 G0034 inside structures, (2) descriptive G0005 chosen to
apply to G0030, and (3) G0043 perceived as "G0031" or not, is called
"G0032". What can G0031, and how, are G0014 G0033 of the interaction
of (1) the temporary ("G0034") structures involved in the G0003 with
(2) the permanent ("G0035") G0005 and links in the G0036 network, or
"G0037 network".  The G0038 of this system is G0039 as a general
G0038 suitable for dealing not only with fluid G0003, but also with
other types of G0039 G0040 and G0041 tasks, such as musical G0040,
G0041 G0042, Bongard problems and others.


12:00 NOON  8TH FLOOR PLAYROOM                         FRIDAY 5/5
Hosts: Harry Voorhees and Dave Siegel

------------------------------

Date: 27 Apr 84 20:51:58-PST (Fri)
From: harpo!ulysses!burl!clyde!akgua!sdcsvax!davidson @ Ucb-Vax
Subject: Re: New topic for discussion (long)
Article-I.D.: sdcsvax.736

This is a response to the submission by Phaedrus at the University of
Maryland concerning speculations about the nature of conscious beings.
I would like to take some of the points in his/her submission and treat
them very skeptically.  My personal bias is that the nature of
conscious experience is still obscure, and that current theoretical
attempts to deal with the issue are far off the mark.  I recommend
reading the book ``The Mind's I'' (Hofstadter & Dennett, eds.) for
some marvelous thought experiments which (for me) debunk most current
theories, including the one referred to by Phaedrus.  The quoted
passages which I am criticizing are excerpted from an article by J. R.
Lucas entitled ``Minds, Machines, and Goedel'' which was excerpted in
Hofstadter's Goedel, Escher, Bach and found there by Phaedrus.

        the concept of a conscious being is, implicitly, realized to be
        different from that of an unconscious object

This statement begs the question.  No rule is given to distinguish conscious
and unconscious objects, nothing is said about the nature of either, and
nothing indicates that consciousness is or is not a property of all or no
objects.

        In saying that a conscious being knows something we are saying not
        only does he know it, but he knows that he knows it, and that he
        knows that he knows that he knows it, and so on ....

First, I don't accept the claim that people possess this meta-knowledge more
than a (small) finite number of levels deep at any time, nor do I accept
that human beings frequently engage in such meta-awareness; just because
human beings can pursue this abstraction process arbitrarily deeply (but
they get lost fairly quickly, in practice) does not mean that there is any
process or structure of infinite extent present.

Second, such a recursive process is straightforward to simulate on a
computer, or imbue an AI system with.  I don't see any reason to regard such
systems as being conscious, even though they do it better than we do (they
don't have our short term memory limitations).
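For instance, arbitrary-depth meta-knowledge reduces to a simple recursive
query over a finite knowledge base.  The sketch below is an invented toy (the
`knows` function and the fact string are hypothetical), showing that no
structure of infinite extent is required:

```python
def knows(kb, fact, depth):
    """Does the agent know (that it knows ...)^depth the fact?
    Each meta-level just wraps the same finite query, so any depth
    is answered without any infinite structure."""
    if depth == 0:
        return fact in kb
    return knows(kb, fact, depth - 1)   # knowing that it knows ...

kb = {"the sky is blue"}
deep = knows(kb, "the sky is blue", 50)   # 50 levels of meta-knowledge
```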

        we insist that a conscious being is a unity, and though we talk
        about parts of our mind, we do so only as a metaphor, and will not
        allow it to be taken literally.

Well, this is hardly in accord with my experience.  I often become aware of
having been pursuing parallel thought trains, but until they merge back
together again, neither was particularly aware of the other.  Marvin Minsky
once said the same thing after a talk claiming that the conscious mind is
inherently serial.  Superficially, introspection may seem to show a unitary
process, but more careful introspection dissolves this notion.

        The paradoxes of consciousness arise because a conscious being can
        be aware of itself, as well as of other things, and yet cannot
        really be construed as being divisible into parts.

The word ``aware'' is an implicit reference to the unknown mechanism of
consciousness.  This is part of the apparent paradox.  Again, there's
nothing mysterious about a system having a model of itself and being able to
do reasoning on that model the same way it does reasoning on other models.
Also again, nothing here supports the claim that the conscious mind is not
divisible.

        It means that a conscious being can deal with Goedelian questions in
        a way in which a machine cannot, because a conscious being can
        consider itself and its performance and yet not be other than that
        which did the performance.

Whatever the conscious mind is, it appears to be housed in a physical
information processing system, to wit, the human brain.  If our current
understanding about the kind of information processing brains are capable of
is correct, brains fall into the class of automata and cannot ultimately do
any processing task that cannot be done with a computer.  The conscious mind
can scrutinize its internal workings to an extent, but so can computer
programs.  Presumably the Goedelian & (more to the point) Turing limitations
apply in principle to both.

        no extra part is required to do this:  it is already complete, and
        has no Achilles' heel.

This is an unsupported statement.  The whole line of reasoning is rather
loose; perhaps the author simply finds it psychologically difficult to
suppose that he has any fundamental limitations.

        When we increase the complexity of our machines, there may, perhaps,
        be surprises in store for us....  Below a certain ``critical'' size,
        nothing much happens....  Turing is suggesting that it is only a
        matter of complexity [before?] a qualitative difference appears.

Well, it's very easy to build machines that are infeasible to predict.  Such
machines do not even have to be very complex in construction to be highly
complex in behavior.  Las Vegas is full of examples of such machines.
The idea that complexity in itself can result in a system able to escape
Goedelian and Turing limitations is directly contradicted by the
mathematical induction used in their proofs:  The limitations apply to
<<arbitrary>> automata, not just to automata simple enough for us to
inspect.

Charlatans can claim any properties they want for mechanisms too complex for
direct disproofs, but one need not work hard before dismissing them with
indirect disproofs.  This is why the patent office rejects claimed perpetual
motion machines which supposedly operate merely by the complexities of their
mechanical or electromagnetic design.  It is also why journals of
mathematics reject ridiculously long proofs which claim to supply methods of
squaring the circle, etc.  No one examines such proofs to find the flaw;
it would be a thankless task, and is not necessary.

        It is essential for the mechanist thesis that the mechanical model
        of the mind shall operate according to ``mechanical principles,''
        that is, we can understand the operation of the whole in terms of
        the operation of its parts....

Certainly one expects that the behavior of physical objects can be explained
at any level of reduction.  However, consciousness is not necessarily a
behavior, it is an ``experience'', whatever that is.  Claims of
consciousness, as in ``I assert that I am conscious'' are behavior, and can
reasonably be subjected to a reductionist analysis.  But whether this will
shed any light on the nature of consciousness is unclear.  A useful analogy
is whether attacking a computer with a voltmeter will teach you anything
about the abstractions ``program'', ``data structure'', ``operating
system'', etc., which we use to describe the nature of what is going on
there.  These abstractions, which we claim are part of the nature of the
machine at the level we usually address it, are not useful when examining
the machine below a certain level of reduction.  But that is no paradox,
because these abstractions are not physical structure or behavior, they are
our conceptualizations of its structure and behavior.  This is as mystical
as I'm willing to get in my analysis, but look at what Lucas does with it:

        if the mechanist produces a machine which is so complicated that
        this [process of reductionist analysis] ceases to hold good of it,
        then it is no longer a machine for the purpose of our discussion,
        no matter how it was constructed.  We should say, rather, that he
        had created a mind, in the same sort of sense as we procreate
        people at present.

If someone produces a machine which exhibits behavior that is
infeasible to predict through reductionist methods, there is nothing
fundamentally different about it.  It is still obeying the laws of physics
at all levels of its structure, and we can still in principle apply to it
any desired reductionist analysis.  We should certainly not claim to have
produced anything special (such as a mind) just because we can't easily
disprove the notion.

        When talking of [human beings and these specially complex machines]
        we should take care to stress that although what was created looked
        like a machine, it was not one really, because it was not just the
        total of its parts:  one could not even tell the limits of what it
        could do, for even when presented with the Goedel type question, it
        got the answer right.

There is simply no reason to believe that people can answer Goedelian
questions any better than machines can.  This bizarre notion that conscious
objects can do such things is unproven and dubious.  I assert that people
cannot do these things, and neither can machines, and that the ability to
escape from Goedel or Turing restrictions is irrelevant to questions of
consciousness, since we are (experientially) conscious but cannot do such
things.

I find that most current analyses of consciousness are either mystical like
the one I've addressed here, or simply miss the phenomenon by attacking the
system at a level of reduction beneath the level where the concept seems to
apply.  It is tempting to think we can make scientific statements about
consciousness just because we can experience consciousness ourselves.  This
idea runs aground when we find that this notion is dependent on capturing
scientifically the phenomena of ``experience'', ``consciousness'' or
``self'', which I have not yet seen adequately done.  Whether consciousness
is a phenomenon with scientific existence, or whether it is an abstract
creation of our conceptualizations with no external or reductionist
existence is still undetermined.

-Greg Davidson (davidson@sdcsvax.UUCP or davidson@nosc.ARPA)

------------------------------

End of AIList Digest
********************

∂07-May-84  1032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #56
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 May 84  10:32:10 PDT
Date: Sun  6 May 1984 18:32-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #56
To: AIList@SRI-AI


AIList Digest             Monday, 7 May 1984       Volume 2 : Issue 56

Today's Topics:
  AI Software - Request for AI Demos,
  Seminars - Object-Oriented Programming in Prolog &
    SIGNUM & Learning in Production Systems & Nonmonotonic Reasoning,
  Conference - 12th POPL Call for Papers
----------------------------------------------------------------------

Date: 30 Apr 84 19:14:00-PDT (Mon)
From: ihnp4!inuxc!iuvax!wickart @ Ucb-Vax
Subject: Needed: AI demos
Article-I.D.: iuvax.3600001

I need some simplistic AI demo programs to help convert the infidels.
EMYCIN, ELIZA/DOCTOR, PARANOID, SHRDLU, and REVERSE would be greatly
appreciated. I can handle LISP, PASCAL, FORTRAN (in AI?), BAL (perish
the thought), C, or PL/I. Can anyone out there help out? USENET is the
only thing that maintains our existence in the USA.
   Thanks in advance,
T.F. Prune (aka Bill Wickart, ihnp4!inuxc!iuvax!wickart)

------------------------------

Date: Fri, 4 May 84 17:32:06 edt
From: jan@harvard (Jan Komorowski)
Subject: Seminar - Object-Oriented Programming in Prolog

             [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 "Object-Oriented Programming in Prolog"

                            Carlo Zaniolo
                          Bell Laboratories

                         Monday, May 7, 1984
                              at 4 PM

              Aiken Lecture Hall, Harvard University
                  (tea in Pierce 213 at 3:30)


     Object-oriented programming has proved very useful in a number of
important applications, because of its ability to unify and simplify the
description of entities and their protocols. Here, we propose a similar
approach for providing this programming paradigm in Prolog.  We introduce
primitives to support the notions of (1) an object with its associated set of
methods, (2) an inheritance network whereby an object inherits the methods of
its ancestors, and (3) message passing between objects.

     Objects and methods are specified by a declaration object with
method_list, where object is a Prolog predicate and each method is an arbitrary
Prolog clause.  Then, a message O:M can be specified as a goal, to request the
application of method M to object O.  The inheritance network, specified by the
isa operator as follows: sub_object isa object, is most useful in handling
default information.  Thus it is possible to specify a method that holds by
default for a general class, and then specify special subcases for which the
general rule is overridden.

     This new functionality is added on top of existing Prolog systems, with no
modification to the underlying interpreter or compiler.
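The default-handling behavior described above can be illustrated outside
Prolog as well.  Here is a minimal Python sketch of isa-style method lookup
with default overriding; the names (`methods`, `isa`, `define_method`,
`send`) are invented for illustration and are not Zaniolo's actual
primitives.

```python
# Minimal sketch of isa-style inheritance with default overriding.
# Names are illustrative only; they are not Zaniolo's Prolog primitives.

methods = {}  # object -> {method name: implementation}
isa = {}      # sub_object -> parent object

def define_method(obj, name, fn):
    methods.setdefault(obj, {})[name] = fn

def send(obj, name):
    """Apply method `name` to `obj` (the message O:M), walking the
    isa chain so a special subcase overrides its class's default."""
    o = obj
    while o is not None:
        if name in methods.get(o, {}):
            return methods[o][name](obj)
        o = isa.get(o)
    raise LookupError("no method %s for %s" % (name, obj))

# A default for a general class, overridden for a special subcase:
define_method("bird", "locomotion", lambda o: "fly")
define_method("penguin", "locomotion", lambda o: "swim")
isa["penguin"] = "bird"
isa["sparrow"] = "bird"
```

Here `send("sparrow", "locomotion")` falls through to the general default,
while the penguin subcase overrides it, mirroring the role the abstract
assigns to the isa network.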

Host: H.J. Komorowski

------------------------------

Date: 28 Apr 84 5:13:41-PST (Sat)
From: decvax!genrad!mit-eddie!whuxle!floyd!cmcl2!lanl-a!unm-cvax!unmva
      x!stanly @ Ucb-Vax
Subject: Seminar - SIGNUM meeting and introduction
Article-I.D.: unmvax.312

SIGNUM is the Special Interest Group on Numerical Mathematics of the
ACM (Association for Computing Machinery).  The group meets monthly during
the academic year.  At each meeting there is a talk on some subject
related to computing or applied mathematics.  The talks are not
restricted to numerical topics.  If you would like to be on the mailing
list, please send a note to John.  A correct address from unmvax is:

WISNIEWSKI@SANDIA.ARPA@lanl-a.UUCP

                                        Stan Steinberg
                                        stanly@unmvax

*******************************************************************

                  Rio Grande Chapter SIGNUM Meeting

Year end meeting and election of officers
Date: Tuesday, May 8, 1984
Speakers: Kathie Hiebert Dodd and Barry Marder - Sandia



Applied AI - "Brave New World" or "Catastrophe Theory Revisited"?
                          Barry Marder

Last year an effort was initiated at Sandia to develop a core of
expertise in the field of artificial intelligence.  One area of
investigation has been expert system technology, which has been
largely responsible for the present explosive growth of interest in
AI.  An expert system is a program that catalogs and makes readily
available expert knowledge in a field.  Such a system has been built
and implemented at Sandia to aid in the design of electrical cables
and connectors.  The speaker will describe this system and offer some
observations on artificial intelligence in general.


       VEHICLE IDENTIFICATION -- A FRAME BASED SYSTEM
                    Kathie Hiebert Dodd

Software has been developed that, when given certain characteristics
from a scene, such as the location of wheels, can identify vehicles.
The image processing, i.e., extracting the characteristics from the
scene, is still done primarily on a VAX.  Given the features, a
frame-based code using "flavors" in the Zetalisp language on a Symbolics
3600 does the vehicle identification.  The main emphasis of the talk
will be on the aspects of a frame-based expert system, in particular
the use of "flavors" and "daemons".
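For readers unfamiliar with the terminology: a "daemon" in a frame system
is a procedure attached to a slot that fires automatically when the slot is
filled or read.  The following Python sketch of an "if-added" daemon is
purely illustrative (the class and names are invented here; this is not the
Sandia code, which used Zetalisp flavors).

```python
# Sketch of a frame with "if-added" daemons: procedures that fire
# automatically when a slot is filled.  All names are hypothetical.

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.daemons = {}  # slot -> list of if-added procedures

    def if_added(self, slot, daemon):
        self.daemons.setdefault(slot, []).append(daemon)

    def fill(self, slot, value):
        self.slots[slot] = value
        for daemon in self.daemons.get(slot, []):
            daemon(self, value)

# A daemon that classifies a vehicle frame once its wheel count is known:
def classify(frame, wheels):
    frame.slots["class"] = {2: "motorcycle", 4: "car"}.get(wheels, "truck")

v = Frame("vehicle-1")
v.if_added("wheels", classify)
v.fill("wheels", 6)  # daemon fires, filling the "class" slot
```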


Location: The Establishment - Albuquerque Dukes Sports Stadium
Price: 10.50 per person - serving Prime Rib (I think)
Social Hour : 5:30 P.M., Dinner: 6:00 P.M., Talks: 7:00 P.M.
PLEASE LET JOHN WISNIEWSKI KNOW BY NOON MONDAY THE 7TH IF YOU ARE
COMING TO DINNER.  If no answer leave a message with EVA 844-7747.

------------------------------

Date: 4 May 1984 1316-EDT
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Learning in Production Systems

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

The AI seminar on May 8 will be given by John Holland of the University
of Michigan.

Title: Learning Algorithms for Production Systems


Learning, broadly interpreted to include processes such as induction, offers
attractive possibilities for increasing the flexibility of rule-based
systems. However, this potential is likely to be realized only when the
rule-based systems are designed ab initio with learning in mind.
In particular, there are substantial advantages to be gained when
the rules are organized in terms of building blocks suitable for
manipulation by the learning algorithms (taking advantage of the
principles expounded by Newell & Simon).  This seminar will concentrate on:

  1. Ways of inducing useful building blocks and rules from experience,
     and
  2. Learning algorithms that can exploit these possibilities through
     "apportionment of credit" and "recombination" of building blocks.
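Holland's "recombination" of building blocks is usually realized as a
crossover operator over fixed-length rule strings.  As a rough illustration
only (a generic genetic-algorithm operator, not necessarily the algorithm of
the talk):

```python
# One-point crossover: recombine two fixed-length rule strings by
# exchanging the "building blocks" on either side of a cut point.
# This is a generic GA operator, shown only to illustrate the idea.

def crossover(a, b, point):
    assert len(a) == len(b) and 0 < point < len(a)
    return a[:point] + b[point:], b[:point] + a[point:]

# Classifier-style condition strings over {0, 1, #} (# = wildcard):
child1, child2 = crossover("110#1", "001#0", 2)
```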

------------------------------

Date: Sat 5 May 84 18:45:28-PDT
From: Benjamin Grosof <GROSOF@SUMEX-AIM.ARPA>
Subject: Seminars - Nonmonotonic Reasoning

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Our regular meeting time and place is Wednesdays 1-2pm (with some
runover to be expected), in Redwood Hall Room G-19.  [...]

Wednesday, May 16:

                Drawing A Line Around Circumscription

                          David Etherington
              University of British Columbia, Vancouver


   The Artificial Intelligence community has been very interested in
the study of reasoning in situations where only incomplete information
is available.  Predicate Circumscription and Domain Circumscription
provide tools for nonmonotonic reasoning in such situations.
However, not all of the problems which might be expected to yield to
circumscriptive inference are actually addressed by the techniques
which have been developed thus far.

   We outline some unexpected areas where existing techniques are
insufficient.


Wednesday, May 23

                DEFAULT REASONING AS CIRCUMSCRIPTION
         A Translation of Default Logic into Circumscription
          OR    Maximizing Defaults Is Minimizing Predicates

                     Benjamin Grosof of Stanford

Much default reasoning can be formulated as circumscriptive.  Using a revised
version [McCarthy 84] of circumscription [McCarthy 80], we propose a
translation scheme from default logic [Reiter 80] into circumscription.  An
arbitrary "normal" default theory is translated into a corresponding
circumscription of a first-order theory.  The method extends effectively to
translating "seminormal" default theories as well, though the result is less
concise and elegant.

Providing a translation of seminormal default logic into circumscription
unifies two of the leading formal approaches to nonmonotonic reasoning, and
enables an integration of their demonstrated applications.  The naturalness
of default logic provides a specification tool for representing default
reasoning within the framework of circumscription.
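For readers who want the formula under discussion: the predicate
circumscription of [McCarthy 80], in one standard second-order formulation
(the notation here is mine, not the abstract's), minimizes P in a theory
A(P):

```latex
% Circumscription of predicate P in theory A(P): no strictly smaller
% predicate P' also satisfies A.  Here P' \le P abbreviates
% \forall x\,(P'(x) \rightarrow P(x)).
\mathrm{Circum}(A;P) \;\equiv\;
  A(P) \,\wedge\, \forall P'\,\bigl[\,A(P') \wedge (P' \le P)
    \;\rightarrow\; (P \le P')\,\bigr]
```

This is the sense of the slogan "maximizing defaults is minimizing
predicates": a default such as "birds fly unless abnormal" is captured by
circumscribing (minimizing) the abnormality predicate.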

------------------------------

Date: Fri, 4 May 84 15:52 PDT
From: Brian Reid <reid@Glacier>
Subject: 12th POPL Call for Papers

Call for Papers: 12th POPL

The twelfth annual ACM SIGACT-SIGPLAN symposium on
PRINCIPLES OF PROGRAMMING LANGUAGES

New Orleans, Louisiana, January 13-16, 1985

The POPL symposium is devoted to the principles of programming
languages. In recent years there have been many papers on
specific principles and specific programming languages embodying
those principles, which might lead one to believe that the symposium is
limited to papers on those topics.

We are eager for papers on important new topics, and therefore this
year we shall not attempt to prescribe particular topics. We
solicit papers that describe important new research results having
to do with the principles of programming languages. We not only
solicit, but seek and encourage, papers describing work in which an
implemented system embodies an important principle in such a way that
the usefulness of that principle can be better understood. All
submitted papers will be read by the program committee.

        Brian Reid, Stanford University (Program Chairman)
        Douglas Comer, Purdue University
        Stuart Feldman, Bell Communications Research
        Joseph Halpern, IBM Research
        David MacQueen, AT&T Bell Laboratories
        Michael O'Donnell, Johns Hopkins University
        Vaughan Pratt, Sun Microsystems and Stanford Univ.
        Guy Steele, Tartan Laboratories
        David Wall, DEC Western Research Laboratory

Please submit nine copies of a 6- to 10-page summary of your paper to
the program chairman. Summaries must be typed double spaced, or typeset
10 on 16. It is important to include specific results, and specific
comparisons with other work. The committee will consider the relevance,
clarity, originality, significance, and overall quality of each
summary. Mail to:

     Brian K. Reid
     Computer Systems Laboratory, ERL 444
     Department of Electrical Engineering
     Stanford University
     Stanford, California, 94305 U.S.A.

(Persons submitting papers from countries in which access to copying
machines is difficult or impossible are welcome to submit a single copy.)

Summaries must be received by the program chairman by August 3, 1984.
Authors will be notified of acceptance or rejection by September 25,
1984.  The accepted papers must be received in camera-ready form by the
program chairman at the above address by November 9, 1984. Authors of
accepted papers will be expected to sign a copyright release form.

Proceedings will be distributed at the symposium and will be
subsequently available for purchase from ACM. The local arrangements
chairman is Bill Greene, University of New Orleans, Computer Science
Department, New Orleans, Louisiana 70148.

------------------------------

End of AIList Digest
********************

∂08-May-84  2210	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #57
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 May 84  22:08:47 PDT
Date: Tue  8 May 1984 21:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #57
To: AIList@SRI-AI


AIList Digest           Wednesday, 9 May 1984      Volume 2 : Issue 57

Today's Topics:
  AI Tools - Structure Editor Request,
  Bindings - Judea Pearl,
  AI Software - LISP on a Data General,
  Linguistics - Metaphors & Puns & Use of "and",
  AI Funding - The End of British AI?,
  AI Literature - Touretzky LISP Book Review,
  Consciousness - Discussion,
  Conference - IEEE Workstation Conference
----------------------------------------------------------------------

Date: 3 May 84 18:17:03-PDT (Thu)
From: hplabs!hao!cires!boulder!marty @ Ucb-Vax
Subject: wanted: display-oriented interlisp structure editor
Article-I.D.: boulder.175

I've been using Interlisp-VAX under VMS for a while now and am getting a bit
tired of the rather antiquated TTY editor.  I know Dave Barstow had a sort of
semi-display interlisp structure editor known as DED, but this seems to have
fallen into a black hole.  Does anyone out there have a screen-oriented
residential structure editor for interlisp?  (Yes, I know the real solution is
to get an 1108, it's on order ...  But I've got too many interlisp users to
point them all at one Dandelion ...)

                                thanks much,
                                 Marty Kent

csnet:
{ucbvax!hplabs | allegra!nbires | decvax!kpno | harpo!seismo | ihnp4!kpno}
                        !hao!boulder!marty
arpanet:
                        polson @ sumex-aim

------------------------------

Date: Mon, 7 May 84 07:54:13 PDT
From: Anna Gibbons <anna@UCLA-CS.ARPA>
Subject: Bindings - Judea Pearl Address

FROM JUDEA PEARL:  Please disregard the old address "UCLA-SECURITY",
any messages should be sent to  "judea@UCLA-CS.ARPA".

Sorry for the inconvenience and confusion.

------------------------------

Date: 19 Apr 84 14:30:21-PST (Thu)
From: decvax!mcnc!ecsvax!bet @ Ucb-Vax
Subject: Re: LISP on a Data General? (sri-arpa.122209)
Article-I.D.: ecsvax.2347

Here at Duke, someone ported a public domain implementation of an
extremely simple subset of LISP (xlisp) to our MV-8000. It suffices
for some robotics programming. I learned LISP on it. Sources in C.
Send me a note if you are interested; it is probably rather big to mail,
though I believe it was originally acquired from net.sources.
We can work out some way to transfer it.
                                Bennett Todd
                                ...{decvax,ihnp4,akgua}!mcnc!ecsvax!bet

------------------------------

Date: 2 May 84 8:35:51-PDT (Wed)
From: hplabs!tektronix!ogcvax!sequent!merlyn @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: sequent.478

> "Telephones are like grapefruits" is a SIMILE, not a metaphor. To be
> a metaphor, it would be "Telephones are grapefruits", and would be harder
> to interpret...
>
> Will

Ahh, but "Telephones are lemons" is fairly easy to interpret.
It just depends on the type of fruit. :-}

Randal L. ("life is like a banana") Schwartz, esq. (merlyn@sequent.UUCP)
        (Official legendary sorcerer of the 1984 Summer Olympics)
Sequent Computer Systems, Inc. (503)626-5700 (sequent = 1/quosine)
UUCP: ...!XXX!sequent!merlyn where XXX is one of:
        decwrl nsc ogcvax pur-ee rocks34 shell teneron unisoft vax135 verdix

P.S. I never metaphor I didn't like. (on a zero to four scale)

------------------------------

Date: 11 Apr 84 14:25:47-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!hes @ Ucb-Vax
Subject: what you see ain't what you get
Article-I.D.: ecsvax.2291

At the end of Bentley's column in the April CACM, he mentions the
AI seminar titled:
                      How to Wreck a Nice Beach
and I thought of that today when I saw a poster describing "Cole's Law".
For those unfamiliar with the concept it refers to
"niht decils egabbac" reversed.
--henry (almost ashamed to sign this) schaffer  ncsu  genetics

------------------------------

Date: 11 Apr 84 8:46:33-PST (Wed)
From: harpo!ulysses!burl!clyde!watmath!watrose!japlaice @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: watrose.6717

[For some reason this took almost a month to show up in the AIList
mailbox.  Other messages may have been similarly delayed.  -- KIL]

There are several philosophical problems with treating
`Indiana and Ohio' as a single entity.

The first is that the Fregean idea that the sense of
a sentence is based on the sense of its parts,
which is thought valid by most philosophers,
no longer holds true.

The second is that if we use that idea in this situation,
then it would probably seem reasonable to use
Quine's ideas for adjectives, namely that
`unicorn', `hairy unicorn', `small, hairy unicorn'
(or other similar examples) are all separate entities,
and I think it is obvious that trying to
derive a reasonable semantic/syntactic theory for
any reasonable fragment of English would then become
virtually impossible.

------------------------------

Date: Mon 7 May 84 17:31:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: The End of British AI?

Having learned recently of yet another attempt by the SERC to foist
upon unwilling AI researchers totally unsuitable equipment chosen for
narrow political reasons, and seeing those researchers wasting their
time fighting off the attempt or reimplementing AI software which they
could get with little effort if they had the right equipment, I am
maybe feeling a bit paranoid. Knowing of the contortions British
researchers are going through to get Alvey money doesn't make me very
optimistic either. Astronomers and high-energy physicists don't seem
to have the same problems.

While I was a graduate student and then a research fellow in the UK, I
had to waste my time fighting off two such attempts, again based on
narrow political considerations. That in the two cases the side I was
on ended up (partly) winning is small consolation when I think of the
time I and others could have used for more productive work.

As to bureaucratic statements of that kind being forgotten, do you
remember the Lighthill report, a ``statement'' that sent British AI
into internal exile for at least five years, causing the drain to the US of
British AI talent we all know about?

                                                - Fernando

------------------------------

Date: 7 May 84 0255 EDT
From: Dave.Touretzky@CMU-CS-A.ARPA
Subject: book announcement

Since people have begun using AIList to announce their latest books (an
excellent idea), I thought I'd briefly describe my new Lisp book.

  "Lisp:  A Gentle Introduction to Symbolic Computation",
  by David S. Touretzky, Harper & Row Publishers, Inc.,
  New York, 1984.  Softcover, 384 pages, $18.95 list.

I originally wrote the book because I wanted to teach an introductory
programming course to humanities students using Lisp.  Although most readers
of this mailing list are interested in the advanced applications of Lisp,
the language is an excellent one for beginners.  It turned out to be a heck
of a lot better for them than Pascal, which is what we teach most beginners
here at CMU.  And Stanford University's freshman programming course is now a
combination of Lisp and Pascal, with my book used for the Lisp component.
Trinity College, in Hartford, CT, uses it in a freshman AI seminar taught
by the Psych dept.  At CMU it was used for several semesters by the English
department (!)  for the programming component of a computer literacy course
for grad students.

Of course, the question you're all dying to ask is:  how does this book
differ from Winston & Horn, and from Wilensky's new book?  My book is the
only GENTLE introduction to Lisp.  As such, its pace is too slow for a
graduate level or advanced undergrad CS course, which is where I feel
Winston & Horn is most appropriate.  On the other hand, I know lots of grad
students in other departments, such as Psych, who found Winston & Horn too
advanced; they were more comfortable with my book.  Wilensky's book is a
wonderful reference for Franz Lisp, which is covered in its entirety,
while my book is based on MacLisp and Common Lisp (although there is an
appendix which mentions Franz) and covers only the basics of those dialects.

If you are an experienced programmer and want to know all about Franz Lisp,
Wilensky is the obvious choice.  On the other hand, if you're new to Lisp,
my book offers the easiest route to becoming fluent in the language.  In
addition to the gentle, easy-to-read style, it contains 75 pages of answers
to exercises.  (Winston & Horn has 60 pages of answers; Wilensky has none.)

  -- Dave Touretzky

------------------------------

Date: 3 May 84 16:55:34-PDT (Thu)
From: ihnp4!ihuxr!pem1a @ Ucb-Vax
Subject: Re: New topic for discussion
Article-I.D.: ihuxr.1064

Phaedrus' article made me think of a story in the book "The
Mind's Eye", by Hofstadter and Dennett, in which the relationship
between subjective experience and physical substance is explored.
Can't remember the story's name but good reading.  Some other
thoughts:

One aspect of experience and substance is how to determine when
a piece of substance is experiencing something.  This is good to
know because then you can fiddle with the substance until it stops
experiencing and thereby get an idea of what it was about the
substance which allowed it to experience.

The first reasonable choice for the piece of substance might be
yourself, since most people presume that they can tell when they
are having a conscious experience.  Unfortunately, being both the
measuree and measurer could have its drawbacks, since some experiments
could simultaneously zap both experience and the ability to know or not
know if an experience exists.  All sorts of problems here.  Could you
just THINK you were experiencing something, but not really?

What this calls for, it seems to me, is two people.  One to measure
and one to experience.  Of course this would all be based on the
assumption that it is even possible to measure such an elusive
thing as experience.  Some people might even object to the notion
that subjective experiences are possible at all.

The next thing is to choose an experience.
This is tricky.  If you chose self-awareness as the experience, then
you would have to decide if being self-aware in one state is the same
as being self-aware in a different state.  Can the experience be the
same even if the object of the experience is not?

Then, a measuring criterion would have to be established whereby
someone could measure if an experience was happening or not.  This
could range from body and facial expressions to neurological readings.
Another would be a Turing test-like setup:  put the subject into a
box with certain I/O channels, and have protocols made up for
measuring things.  This would allow you to REALLY get in there and
fiddle with things, like replacing body parts, etc.

These are some of the thoughts that ran through my head after reading
the Phaedrus article.  I think I thought them, and if I didn't, how
did this article get here?

                            Tom Portegys, Bell Labs, ihlpg!portegys

(ihlpg currently does not have netnews, that's why this is coming from
ihuxr).

------------------------------

Date: Sun 6 May 84 11:15:48-PDT
From: Dennis Allison <CSL.ALLISON@SU-SIERRA.ARPA>
Subject: IEEE Workstation Conference: Call for Papers


            -----------------------------------------------------
            1st International Conference on Computer Workstations
            -----------------------------------------------------

                    San Francisco Bay Area, May-June 1985.

                     Sponsored by: IEEE Computer Society

Computer workstations are integral to increases in productivity and quality,
and they are the focal point for a growing fraction of professional activity.

A "workstation", broadly defined, is a system that interacts with a user to
help the user accomplish some kind of work.  Included in this definition are:
CAD systems,  high-resolution graphics systems, office productivity systems,
computer-based engineering support stations of all kinds, architectural sys-
tems, software engineering environments, etc.

"Workstations" here includes both hardware and software: hardware to run the
applications, software to customize the environments.

Technical Program

Papers are solicited from the technical community at large in a widely seen
series of advertisements.  Sessions to be organized from submitted papers and
from Program Committee contacts.

The technical program will have approximately 32 sessions, arranged in three
tracks, spanning 3 full days.  Technical sessions will be derived from submit-
ted papers and from Program Committee organized sessions.  The Program Commit-
tee will include leaders and important contributors to the field of computer
workstations.  International representation will be sought.

There will be an invited keynote speaker and a formal opening session, best
paper awards, and a set of pre-conference tutorials.  Also, a "Special Ad-
dress" on the 2nd day.

Exhibits

Over 150 "booths" are expected to be populated by nearly as many companies ex-
hibiting hardware and software pertaining to workstations of all kinds.  High
standards of technical exhibitions will be maintained by the IEEE to assure a
technically sophisticated and educational set of exhibits.  Wide international
participation is anticipated.

Exhibits are set up on Monday, shown Tuesday through Thursday from 10 AM to 7
PM, and dismantled on Friday.

                              Program Chairman:

                              Dr. Edward Miller
                           Software Research, Inc.
                              580 Market Street
                           San Francisco, CA  94104

                 Phone:  (415) 957-1441  --  Telex:  340 235

------------------------------

End of AIList Digest
********************

∂14-May-84  1803	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #58
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 May 84  18:02:50 PDT
Date: Mon 14 May 1984 17:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #58
To: AIList@SRI-AI


AIList Digest            Tuesday, 15 May 1984      Volume 2 : Issue 58

Today's Topics:
  AI Tools - Personal Computer Query,
  AI Books - LISPcraft Review,
  Humor - Distributed Intelligence,
  Linguistics - Metaphors,
  Job Market - Noncompetition Clauses,
  Seminar - Content-Addressable Memory,
  Conference - IEEE Knowledge-Based Systems Conference
----------------------------------------------------------------------

Date: 11 May 84 13:13:39-PDT (Fri)
From: decvax!wivax!apollo!tarbox @ Ucb-Vax
Subject: LISP machines question
Article-I.D.: apollo.1f4a00cb.d4d

Can anyone out there tell me what the smallest
(i.e., least expensive) home/personal computer is
that runs some sort of LISP?

                                -- Brian Tarbox  @APOLLO

------------------------------
Date: Wed, 9 May 84 17:36:35 pdt
From: wilensky%ucbdali@Berkeley (Robert Wilensky)
Subject: AIList book announcement


I want to dispel an incorrect impression left by Dave Touretzky about my
recent book on LISP (which, incidentally, is called

     LISPcraft
     by Robert Wilensky
     W. W. Norton & Co.
     New York, 1984.  Softcover, 385 pages, $19.95 list.  )

Specifically, Touretzky gave the impression the my book was geared to
advanced Franz LISP programming, and was not appropriate as a general
tutorial for the novice.  Nothing could be further from the truth.
LISPcraft is NOT meant to be primarily a reference for Franz LISP, nor is it
intended as an advanced LISP text.  Rather, the book is meant to be a
self-contained LISP tutorial for the novice LISP programmer.

LISPcraft does assume some familiarity with computers, so it may not be
ideal for the computationally illiterate.  On the other hand, like
Touretzky's book, and unlike Winston and Horn's, almost the entire length of
my book is a tutorial on various aspects of the language.

From my point of view, the primary difference between these books is that I
try to cover the language from the programmer's point of view.  This means
that I pay homage to the way LISP programmers actually use the language.  As
a consequence, I spend some time on features of LISP that one hardly finds
discussed anywhere, e.g., programming idioms, macro writing techniques,
read macros, debugging, error handling, non-standard flow of control, the
oblist, non-s-expression data types, systems functions, compilation, and
aspects of I/O.  I also give some serious programming examples (pattern
matching and deductive data base management).  However, my book starts at
ground zero, and works its way through the basics.  In fact, the text is
about evenly divided between the sort of issues listed above and more basic
``car-cdr-cons'' level stuff.  Most importantly, the text is entirely
tutorial in nature and presumes no previous knowledge of LISP whatsoever.  I
believe that basics of LISP programming are presented to the uninitiated as
well here as they are anywhere.

In sum, LISPcraft contains a more extensive exposition of LISP than either
Winston's or Touretzky's book.  Winston's book contains many more examples of
LISP programs than does LISPcraft, and Touretzky's book covers less material
at a slower pace.

As Touretzky states, LISPcraft does contain a thorough exposition of a
particular LISP dialect, namely Franz.  For example, the book contains an
appendix that describes all Franz LISP functions.  However, most of the book
is rather dialect independent, and major idiosyncracies are noted

throughout.  The point of the thoroughness is to suggest a repetoire
of functions that programmers actually use, i.e., to convey what a real
LISP language looks like, aside from serving as a reference for Franz users
per se.  As I suggest in my preface, I believe ``it is easier to learn a new
dialect having mastered another than it is having learned a language for
which there are no native speakers.''

I take strong exception to Touretzky's claim that his book offers the
``easiest route to becoming fluent in the language.''  Besides my belief in
the appropriateness of my own book for the novice, I wish to point out that
memorizing a German grammar book does NOT make one fluent in German.  There
is a large body of other knowledge that is crucial to using a language
effectively, be that language natural or artificial.  This fact was a prime
motivation behind my writing LISPcraft in the first place.

Rather than make the claim that my own book provides the best route to
fluency, or argue its merits as an introductory LISP text, I invite the
interested reader to judge for himself or herself.

------------------------------

Date: 2 May 84 19:45:13-PDT (Wed)
From: ihnp4!oddjob!jeff @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: oddjob.172

        Do you suppose that when enough connections are made, the
UUCP network will spontaneously develop intelligence?


                            Jeff Bishop    || University of Chicago
                      ...ihnp4!oddjob!jeff || Astrology & Astrophysics Center

------------------------------

Date: 4 May 84 18:54:17-PDT (Fri)
From: hplabs!tektronix!ogcvax!sequent!richard @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: sequent.483

    Do you suppose that when enough connections are made, the UUCP
    network will spontaneously develop intelligence?

Perhaps it already has.  Maybe that's what keeps eating all those
first lines, and regurgitating the weeks-old news.

             ____________________________________________
The preceding should not be construed as the statement or opinion of the
employers or associates of the author.   It might not even be the author's.

I try to make a point of protecting the innocent,
        but none of them can be found...
                                                        ...!sequent!richard

------------------------------

Date: 11 May 84 19:27:29-PDT (Fri)
From: decvax!minow @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: decvax.482

An earlier discussion of this topic may be found in the story
"Inflexible Logic" by Russell Maloney (The New Yorker, 1940)
reprinted in The World of Mathematics, Vol. 4, pp. 2262-2267.

Martin Minow
decvax!minow

------------------------------

Date: 7 May 84 11:02:00-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Re: metaphors - (nf)
Article-I.D.: uicsl.15500034

FLAME ON

Your complaint that a comparison using "like" is a simile (and not a
metaphor) is technically correct.  But it shows that you're not
following the research.  Metaphor or simile (or juxtaposition, etc.),
these figures of speech raise the same problems and questions of how
analogical reasoning works, how comparisons convey meaning, how
people dream them up, and how other people understand them.  For
this reason the word metaphor is used to refer collectively to the
whole lot of them.  Pretending you're a high school English teacher
doesn't help.

FLAME OFF

------------------------------

Date: 10 May 84 21:16:05-PDT (Thu)
From: decvax!genrad!wjh12!foxvax1!brunix!jah @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: brunix.7927

You should be aware that it is not necessarily the case that you MUST
sign the non-disclosure agreement exactly as worded.  I recently signed
on as a consultant with a company which had a very stringent (and absolutely
ridiculous) nondisclosure/non-competition form.  I refused to sign
certain sections (mainly those limiting me from practicing AI, consulting
for others where there was no conflict of interest, etc.).  We eventually
eliminated those clauses, rewrote the contract and I signed willingly.

Similarly, another company I worked for was unwilling to change the document,
but, when I refused to sign away my rights, they pointed out that I got
to fill in a section with information about what things I already had going
for me (that is, what things I had done previously so the company had no claim
on these things).  Since the company's contract included such things as
"no competing business" and the like, I was able to claim prior rights to
"artificial intelligence research", "natural language processing", and
"expert systems research."  The very vagueness of these things, according
to my legal advisor, makes it that much harder for the company to really do
anything.

A final note: most companies will claim they do this "as red tape" and
will "not really hassle you."  Don't believe them!  They've got more bucks
than you, and if it goes to court, EVEN IF YOU WIN, it will cost you more
than you can afford.  Speak to a lawyer, change contracts, etc.  In the AI
world we've got a seller's market.  Take advantage of it; these companies
want you, and will be willing to negotiate.

  Sorry if I do go on...
  Jim Hendler

------------------------------

Date: Wed 9 May 84 18:08:03-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Content-Addressable Memory

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                        FOR THE RECORD

CSLI post-doctoral fellow Pentti Kanerva was a guest lecturer at MIT
Tuesday, May 1. The topic of his lecture was "Random-access Memory with a
Very Large Address Space (2^1000) as a Model of Human Memory: Theory and
Implementation." Douglas R. Hofstadter was host. Following is an abstract
of the lecture.

Humans can retrieve information from memory according to content (recalling
and recognizing previously encountered objects) and according to temporal
sequence (performing a learned sequence of actions). Retrieval times
indicate the direct retrieval of stored information.

In the present theory, memory items are represented by n-bit binary words
(points of the space {0,1}^n).  The unifying principle of the theory is that the
address space and the datum space of the memory are the same. As in the
conventional random-access memory of a computer, any stored item can be
accessed directly by addressing the location in which the item is stored;
the sequential retrieval is accomplished by storing the memory record as
a linked list. Unlike in the conventional random-access memory, many
locations are accessed at once, and this accounts for recognition.

Three main results have been obtained: (1) The properties of neurons allow
their use as address decoders for a generalized random-access memory;
(2) distributing the storage of an item in a set of locations makes very
large address spaces (2^1000) practical; and (3) structures similar to those
suggested by the theory are found in the cerebellum.
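The addressing scheme described above -- many locations activated at once, each holding counters that are pooled on retrieval -- can be sketched in a few lines.  The word length, number of hard locations, and activation radius below are illustrative assumptions, not the parameters of Kanerva's actual implementation.

```python
import random

random.seed(0)

N = 256        # word length (the theory envisions ~1000 bits)
M = 2000       # number of "hard" locations sampled from {0,1}^N
RADIUS = 111   # activation radius in Hamming distance (an assumed value)

addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]   # each location holds N counters

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def active(addr):
    """All locations within RADIUS of addr -- many are accessed at once."""
    return [i for i in range(M) if hamming(addresses[i], addr) <= RADIUS]

def write(addr, word):
    """Distribute the word over every active location's counters."""
    for i in active(addr):
        for j in range(N):
            counters[i][j] += 1 if word[j] == 1 else -1

def read(addr):
    """Pool the counters of active locations and threshold at zero."""
    acts = active(addr)
    return [1 if sum(counters[i][j] for i in acts) > 0 else 0
            for j in range(N)]

# Address space and datum space coincide: store a word at its own
# address, then retrieve it from a corrupted copy of that address.
word = [random.randint(0, 1) for _ in range(N)]
write(word, word)
noisy = list(word)
for j in random.sample(range(N), 20):   # flip 20 of the 256 address bits
    noisy[j] ^= 1
recovered = read(noisy)
```

Because the activation circles of the original and the corrupted address overlap heavily, pooling and thresholding recovers the stored word, which is the content-addressable behavior the abstract describes.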

------------------------------

Date: 11 May 1984 07:08:26-EDT
From: Mark.Fox@CMU-RI-ISL1
Subject: IEEE AI Conf. Call for Papers

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                                CALL FOR PAPERS

            IEEE Workshop on Principles of Knowledge-Based Systems

          Sheraton Denver Tex, Denver, Colorado, 3 - 4 December 1984

Purpose:

The  purpose of this conference is to focus attention on the principal theories
and methods of artificial intelligence which have played an important  role  in
the  construction  of  expert  and  knowledge-based systems.  The workshop will
provide a forum for  researchers  in  expert  and  knowledge-based  systems  to
discuss the concepts which underlie their systems.  Topics include:

   - Knowledge Acquisition.
        * manual elicitation.
        * machine learning.
   - Knowledge Representation.
   - Causal modeling.
   - The Role of Planning in Expert Reasoning
   - Knowledge Utilization.
        * rule-based reasoning
        * theories of evidence
        * focus of attention.
   - Explanation.
   - Validation.
        * measures.
        * user acceptance.

Please  send  eight  copies of a 1000-2000 word double-spaced, typed summary of
the proposed paper to:
               Mark S. Fox
               Robotics Institute
               Carnegie-Mellon University
               Pittsburgh, Pennsylvania 15213

All submissions will be read by the program committee:
   - Richard Duda, Syntelligence
   - Mark Fox, Carnegie-Mellon University
   - John McDermott, Carnegie-Mellon University
   - Tom Mitchell, Rutgers University
   - John Roach, Virginia Polytechnical Institute
   - Reid Smith, Schlumberger Corp.
   - Mark Stefik, Xerox Parc
   - Donald Waterman, Rand Corp.

Summaries are to focus primarily on new principles, but each  principle  should
be  illustrated  by  its  use in a knowledge-based system.  It is important to
include specific findings or results, and specific  comparisons  with  relevant
previous  work.    The  committee  will  consider the appropriateness, clarity,
originality, significance and overall quality of each summary.

June 7, 1984 is the deadline for the submission of summaries.  Authors will  be
notified of acceptance or rejection by July 23, 1984.  The accepted papers must
be  typed  on  special  forms and received by the program chairman at the above
address by September 3, 1984.  Authors of accepted papers will be  expected  to
sign a copyright release form.

Proceedings  will  be  distributed  at  the  workshop  and will be subsequently
available for purchase from IEEE.  Selected  full  papers  will  be  considered
(along  with  papers  from  the  IEEE  Conference on AI and Applications) for a
special issue of IEEE PAMI on knowledge-based systems to be published in  Sept.
1985.  The deadline for submission of full papers is 16 December 1984.


                               General Chairman

                         John Roach
                         Dept. of Computer Science
                         Virginia Polytechnic Institute
                         Blacksburg, VA



                              Program Co-Chairmen

     Mark S. Fox                             Tom Mitchell
     Robotics Institute                      Dept. of Computer Science
     Carnegie-Mellon Univ.                   Rutgers University
     Pittsburgh, PA                          New Brunswick, NJ

     Registration Chairman              Local Arrangements Chairman
     Daniel Chester                          David Morgenthaler
     Dept. of Computer Science               Martin Marietta Corp.
     University of Delaware                  Denver, Colorado
     Newark, Delaware

------------------------------

End of AIList Digest
********************

∂20-May-84  2349	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #59
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 May 84  23:48:19 PDT
Date: Sun 20 May 1984 22:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #59
To: AIList@SRI-AI


AIList Digest            Sunday, 20 May 1984       Volume 2 : Issue 59

Today's Topics:
  Metaphysics - Perception, Recognition, Essence, and Identity
----------------------------------------------------------------------

Date: 15 May 84 23:33:31-PDT (Tue)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax
Subject: A topic for discussion, phil/ai persons.
Article-I.D.: wxlvax.277

Here is a thought which a friend and I have been kicking around for a while
(the friend is a professor of philosophy at Penn):

It seems that it is IMPOSSIBLE to ever build a computer that can truly
perceive as a human being does, unless we radically change our ideas
about how perception is carried out.

The reason for this is that we humans have very little difficulty
identifying objects as the same across time, even when all the features of
that object change (including temporal and spatial ones).  Computers,
on the other hand, are being built to identify objects by feature-sets.  But
no set of features is ever enough to assure cross-time identification of
objects.

I accept that this idea may be completely wrong.  As I said, it's just
something that we have been batting around.  Now I would like to solicit
opinions of others.  All ideas will be considered.  All references to
literature will be appreciated.  Feel free to reply by mail or on the net.
Just be aware that I don't log on very often, so if I don't answer for a
while, I'm not snubbing you.

--Alan Wexelblat (for himself and Izchak Miller)
(currently appearing at: ...decvax!ittvax!wlxvax!rlw  Please put "For Alan" in
all mail headers.)

------------------------------

Date: 15 May 84 14:49:41-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: ariel.630

The computer needs to be able to distinguish between "metaphysically identical"
and "essentially the same".  This distinction is at the root of an old (2500
years?) Greek ship problem: When a worn board is replaced by a new board,
the ship is changed, but it is the same ship.  The difference leaves the
ship essentially the same but not identically the same.  If all the boards
of a ship are replaced one by one until the ship is entirely redone with new
boards, it is still the same ship (essentially).  Now, if all the old boards
that had been removed were put together again in their original configuration
so as to duplicate the new-board ship, would the new old-board ship be
identically or essentially the same as the original old-board ship?  Assume
nailless construction techniques were used throughout, and assume all boards
always fit perfectly the same way every time.

We now have two ships that are essentially the same as the original ship,
but, I maintain, neither ship is identical to the original ship.  The original
ship's identity was not preserved, although its identity was left sufficiently
unchanged so as to preserve the ship's essence.  The ship put together with
the previously-removed old boards is not identically the same as the original
old-board ship either, no matter how carefully it is put together.  It too is
only essentially the same as the original ship.

A colleague suggested that 'essence' in this case was contextual, and I
tend to agree with him.

Actually, even if the Greeks left the original ship alone, the ship's identity
would change from one instant to the next.  Even while remaining essentially
the same, the fact that the ship exists in the context of (and in relation to)
a changing universe is enough to vary the ship's identity from moment to
moment.  The constant changes in the ship's characteristics are admittedly very
subtle, and do not change the essential capacity/functionality/identity of the
ship.  Minute changes in a ship's identity have 'essentially' no impact.  Only
a sufficiently large change (such as a small hole in the hull) has an
essential impact.

"Essence" has historically been considered metaphysical.  In her "Introduction
to Objectivist Epistemology" (see your local bookstore) Ayn Rand identified
essence as epistemological rather than metaphysical.  The implications of this
identification are profound, and more than I want to get into in this article.
Philosopher Leonard Peikoff's article "The Analytic-Synthetic Dichotomy", in
the back of the newer editions of Rand's Intro to Obj Epist, shows how crucial
the distinction between essence-as-metaphysical and essence-as-epistemological
really is.
Read Rand's book and see why the computer would have to make the same distinc-
tion.  That distinction, however, has to be made on the CONCEPTUAL level.  I
think Rand's discussion of concept-formation will probably convince you that
it will be quite some time before man-made machinery is up to that...
Norm Andrews, AT+T Information Systems (201)834-3685 vax135!ariel!norm

------------------------------

Date: 16 May 84 7:10:40-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!rosen @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: gloria.176

Just a few quick comments,
1)  The author seems to use "perceive" to mean visual perception.  Vision
cannot be a prerequisite for intelligence, given all the counterexamples in
the human race.  Not every human has sight, so we should be able to get
intelligence from various types of inputs.

2)  The fact that humans CAN do it is evidence that OTHER systems can do it.

3)  The major assumption is that the only way a computer can identify objects
is by having static "feature-sets" that are from the object alone, without
having additional information, but why have that restriction?  First,
all features don't change at once: your grandmother doesn't all-of-a-sudden
have the features of a desk.  Second, the processor can/must change with the
environment as well as the object in question.  Third, context plays a very
important role in the recognition of an object.  Functionality of the object
is crucial, as are remindings from previous interactions with it, and so on.
The point is that
clearly a static list of what features objects must have and what features
are optional is not enough.  Yet there is no reason to believe that
this is the only way computers can represent objects.  The points
here come from many sources, and have their origin from such people
as Marvin Minsky and Roger Schank among others.  There is a lot of
literature out there.

------------------------------

Date: 16 May 84 9:50:24-PDT (Wed)
From: hplabs!hao!seismo!rochester!ritcv!ccieng5!ccieng2!bwm @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: ccieng2.179

I don't think ANYONE is looking to build a computer that can understand
philosophy.  If I can build something that acts the same as an IQ-80 person,
I would be happy.  This involves a surprising amount of work (like vision,
language, etc.), but such a system could certainly be confused by two
'identical' ships, as could I.  Just because a human can do something does not
imply that our immediate AI goals should include it.  Rather, let's first
worry about things ALL humans can do.

Brad Miller

...[cbrma, rlgvax, ritcv]!ccieng5!ccieng2!bwm

------------------------------

Date: 17 May 84 7:04:41-PDT (Thu)
From: ihnp4!houxm!hocda!hou3c!burl!ulysses!unc!mcnc!ecsvax!emigh @
      Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: ecsvax.2511

  This reminds me of the story of Lincoln's axe (sorry, I've forgotten the
source).  A farmer was showing a visitor Lincoln's axe:
Visitor:        Are you sure that's Lincoln's axe?

Farmer:         It's Lincoln's axe.  Of course I've had to replace the handle
                three times and the head once, but it's Lincoln's axe alright.

Adds another level of reality to the Greek Ship Problem.

Ted H. Emigh     Genetics and Statistics, North Carolina State U, Raleigh  NC
USENET: {akgua decvax duke ihnp4 unc}!mcnc!ecsvax!emigh
ARPA:   ecsvax!emigh@Mcnc or decvax!mcnc!ecsvax!emigh@BERKELEY

------------------------------

Date: 16 May 84 15:20:19-PDT (Wed)
From: ihnp4!drutx!houxe!hogpc!houti!ariel!vax135!floyd!cmcl2!seismo!ro
      chester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: gloria.178

This is a good example of the principle that it depends on who's
doing the perceiving.  To a barnacle, it's a whole new ship.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 16 May 84 15:17:06-PDT (Wed)
From: harpo!seismo!rochester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: Can computers perceive
Article-I.D.: gloria.177

If by "perception" you imply "recognition", then of course computers
cannot perceive as we can.  You can recognize only what is meaningful
to you, and that probably won't be meaningful to a computer.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 16 May 84 10:57:00-PDT (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai pers - (nf)
Article-I.D.: uiucdcs.32300026

The problem is one of identification. When we see one object matching a
description of another object we know about, we often assume that the object
we're seeing IS the object we know about -- especially when we expect the
description to be definite [1]. This is known as Leibniz's law of the
indiscernability of identicals. That's found its way into the definitions
of set theory [2]: two entities are "equal" iff every property of one is also
a property of the other. Wittgenstein [3] objected that this did not allow for
replication, ie the fact that we can distinguish two indistinguishable objects
when they are placed next to each other (identity "solo numero"). So, if we
don't like to make assumptions, either no two objects are ever the same object,
or else we have to follow Aristotle and say that every object has some property
setting it apart from all others. That's known as Essentialism, and is hotly
disputed [4]. The choices until now have been: breakdown of identification,
essentialism, or assumption. The latter is the most functional, but not nice
if you're after epistemic certainty.
        Still, I see no insurmountable problems with making computers do the
same as ourselves: assume identity until given evidence to the contrary. That
we can't convince ourselves of that method's epistemic soundness does nothing
to its effectiveness. All one needs is a formal logic or set theory (open
sentences, such as predicates, are descriptions) with a definite description
operator [2,5]. Of course, that makes the logic non-monotonic, since a definite
description becomes meaningless when two objects match it. In other words, a
closed-world assumption is also involved, and the theory must go beyond first-
order logic. That's a technical problem, not necessarily an unsolvable one [6].


[1] see the chapter on SCHOLAR in Bobrow's "Representation and Understanding";
    note the "uniqueness assumption".
[2] Introduced by Whitehead & Russell in their "Principia Mathematica".
[3] Wittgenstein's "Tractatus".
[4] WVO Quine, "From a logical point of view".
[5] WVO Quine, "Mathematical Logic".
[6] Doyle's Truth Maintenance System (Artif. Intel. 12) attacks the non-
    monotonicity problem fairly well, though without a sound theoretical
    basis. See also McDermott's attempt at formalization (Artif. Intel. 13
    and JACM 29 (Jan '82)).
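The "assume identity until given evidence to the contrary" strategy sketched above can be illustrated in a few lines: a definite description denotes only while exactly one known object satisfies it, so learning of a second matching object forces the earlier identification to be withdrawn -- the non-monotonic step.  The objects and properties below are invented purely for illustration.

```python
# Knowledge base: named objects with sets of known properties (invented).
known = {
    "ship1": {"wooden", "greek", "thirty-oared"},
}

def identify(description, objects):
    """Return the unique object satisfying the description, else None.

    A description is a set of properties; it acts as a definite
    description only when exactly one known object matches it.
    """
    matches = [name for name, props in objects.items()
               if description <= props]
    return matches[0] if len(matches) == 1 else None

seen = {"wooden", "greek"}          # properties of the object perceived

assumed = identify(seen, known)     # unique match: assume identity

# New knowledge: a duplicate built from the old boards also matches.
known["ship2"] = {"wooden", "greek", "thirty-oared"}

revised = identify(seen, known)     # description is no longer definite
```

With one matching object, `identify` returns it; once the duplicate is added, the same description fails to denote and the earlier identification must be retracted, which is exactly why the logic cannot remain monotonic.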

                                        Marcel Schoppers
                                        U of Illinois at Urbana-Champaign
                                        uiucdcs!marcel

------------------------------

End of AIList Digest
********************

∂21-May-84  0044	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #60
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 May 84  00:43:32 PDT
Date: Sun 20 May 1984 22:43-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #60
To: AIList@SRI-AI


AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 60

Today's Topics:
  AI Literature - Artificial Intelligence Abstracts,
  Survey - Summary on AI for Business,
  AI Tools - LISP on PCs & Boyer-Moore Prover on VAXen and SUNs,
  Games - Core War Software,
  AI Tools - Display-Oriented LISP Editors
----------------------------------------------------------------------

Date: Sun 20 May 84 14:10:16-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Artificial Intelligence Abstracts

   Does anyone else on this list wish, as I do, that there
existed a publication entitled ARTIFICIAL INTELLIGENCE
ABSTRACTS? The field of artificial intelligence is probably the
supreme interdisciplinary sphere of activity in the world, and
its vital concerns extend across the spectrum of computer
science, philosophy, psychology, biology, mathematics, literary
theory, linguistics, statistics, electrical engineering,
mechanical engineering, etc.

   I wonder if one of the major member publishers of the NFAIS
(National Federation of Abstracting & Indexing Services) could
be convinced to undertake the publication of a monthly
reference serial which would reprint from the following
abstracting services those abstracts which bear most
pertinently on the concerns of AI research:

   Biological Abstracts / Computer & Control Abstracts /
Computer & Information Systems Abstracts Journal /  Current
Index to Journals in Education / Dissertation Abstracts
International / Electrical & Electronics Abstracts /
Electronics & Communications Abstracts Journal / Engineering
Index / Government Reports Announcements and Index /
Informatics Abstracts / Information Science Abstracts /
International Abstracts in Operations Research / Language and
Language Behavior Abstracts / Library & Information Science
Abstracts / Mathematical Reviews / Philosopher's Index / PROMT
/ Psychological Abstracts / Resources in Education /  (This is
by no means a comprehensive list of relevant reference
publications.)

   Would other people on the list find an abstracting service
dedicated to AI useful? Perhaps an initial step in developing
such a project would be to arrive at a consensus regarding what
structure of research fronts/subject headings appropriately
defines the field of AI.

  --Wayne McGuire

------------------------------

Date: Fri, 18 May 84 15:29:35 pdt
From: syming%B.CC@Berkeley
Subject: Summary on AI for Business

This is the summary of the responses to my request about "AI for Business" one
month ago on AIList Digest.

Three organizations are working in this area: Syntelligence, SRI, and
Arthur D. Little, Inc.

Syntelligence's objective is to bring intelligent computer systems to business.
Currently the major work is in the finance area. The person to contact is:
Peter Hart, President, 800 Oak Grove Ave, Suite 201, Menlo Park, CA 94025.
            (415) 325-9339, <HART@SRI-AI.ARPA>

SRI has a sub-organization called Financial Expert System Program headed by
Sandra Cook, (415) 859-5478. A prototype system for a financial application
has been constructed.  <SANDRA@SRI-KL.ARPA>

Arthur D. Little is developing AI-based MRP, financial planning, strategic
planning, and marketing systems. However, I do not have much information yet.
The person to contact is Tom Martin.  <TJMartin@MIT-MULTICS.ARPA>
The Director of AI at Arthur D. Little, Karl M. Wiig, gave an interesting
talk on "Will Artificial Intelligence Provide The Rebirth of Operations
Research?" at TIMS/ORSA Joint National Meeting in San Francisco on May 16.
In his talk, a few projects in ADL are mentioned. If interested, write to
35/48 Acorn Park, Cambridge, MA 01240.

Gerhard Friedrich of DEC also gave a talk about expert systems at the TIMS/ORSA
meeting on Tuesday. He mentioned XSEL for sales, XCON for engineering, ISA,
IMACS and IBUS for manufacturing, and XSITE for customer services. XCON is the
successor of R1, which is well known. XSEL was published in Machine Intelligence
Vol.10. However, I do not know the references for the rest. If you know, please
inform me.

Interest in AI within the business community has just begun. TIMS is probably
the first business professional society to form an interest group on AI. If
interested, please write to W. W. Abendroth, P.O. Box 641, Berwyn, PA 19312.


The people who have responded to my request and shown interest are:
         ---------------------------------------------------
SAL@COLUMBIA-20.ARPA
DB@MIT-XX.ARPA
Henning.ES@Xerox.ARPA
brand%MIT-OZ@MIT-MC.ARPA
NEWLIN%upenn.csnet@csnet-relay.arpa
shliu%ucbernie@Berkeley.ARPA
klein%ucbmerlin@Berkeley.ARPA
david%ucbmedea@Berkeley.ARPA
nigel%ucbernie@Berkeley.ARPA
norman%ucbernie@Berkeley.ARPA
meafar%B.CC@Berkeley.ARPA
maslev%B.CC@Berkeley.ARPA
edfri%B.CC@Berkeley.ARPA
        ------------------------------------------------------

Please inform me if I made any mistakes in the above statements. Keep in touch.

syming hwang, syming%B.CC@Berkeley.ARPA, (415) 642-2070,
              350 Barrows Hall, School of Business Administration,
              U.C. Berkeley, Berkeley, CA 94720

------------------------------

Date: Tue, 15 May 84 10:25 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: LISP machines question

To my knowledge, the least expensive PC that runs LISP is the Atari.
Sometime during the past year I read a review in Creative Computing of
an Interlisp subset that runs on the Atari family.  The reviewer was
Kenneth Litkowski and his overall impression of the product was favorable.
 -Kurt Godden
  General Motors Research Labs

------------------------------

Date: 14-May-84 23:07:56-PDT
From: jbn@FORD-WDL1.ARPA
Subject: Boyer-Moore prover on VAXen and SUNs

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

     For all theorem proving fans, the Boyer-Moore Theorem Prover has now
been ported to VAXen and SUNs running 4.2BSD Unix.  Boyer and Moore ported
it from TOPS-20 to the Symbolics 3600; I ported it from the 3600 to the
VAX 11/780, and it worked on the SUN the first time.  Vaughn Pratt has
a copy.  Performance on a SUN 2 is 57% of a VAX 11/780; this is quite
impressive for a micro.
     Now when a Mac comes out with some real memory...

                                Nagle (@SCORE)

------------------------------

Date: Sunday, 20 May 1984 23:23:30 EDT
From: Michael.Mauldin@cmu-cs-cad.arpa
Subject: Core War

[The Scientific American article referred to below is an entertaining
description of software entities that crawl or hop through an address
space trying to destroy other such entities and to protect themselves
against similar depredations.  Very simple entities are easy to protect
against or to destroy, but are difficult to find.  Complex entities
(liveware?) have to be able to repair themselves more quickly than
primitive entities can eat away at them.  This leads to such oddities
as a redundant organism that switches its consciousness between bodies
after verifying that the next body has not yet been corrupted.  -- KIL]


If anybody is interested in the May Scientific American's Computer
Recreations article, you may also be interested in getting a copy
of the CMU version of the Redcode assembler and Mars interpreter.

I have written a battle program which has some interesting implications
for the game.  The program 'mortar' uses the Fibonacci sequence to
generate a pseudo-random series of attacks.  The program spends 40% of
its time shooting at other programs, and finally kills itself after
12,183 cycles.  Before that time it writes to 53% of memory and is
guaranteed to hit any stationary program larger than 10 instructions.

Since the attacks are random, a program which relocates itself has no
reason to hope that the new location is any safer than the old one.
Some very simplistic mathematical analysis indicates that while
Dwarf should kill Mortar 60% of the time (this has been verified
empirically), no non-repairing program of size 10 or larger can beat
Mortar.  Furthermore, no self-repairing program of size 141 can beat
Mortar.  I believe that this last result can be tightened significantly,
but I haven't looked at it too long yet.  I haven't written this up,
but I might be cajoled into doing so if many people are interested.
I would very much like to see some others verify/correct these results.
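For the curious, the coverage claim is easy to probe with a toy model: step an attack pointer through core with a Fibonacci recurrence and measure what fraction of the cells is ever hit, plus the largest run of untouched cells (any stationary program longer than that run must be hit at least once).  The core size and the exact recurrence below are assumptions for illustration; the actual Redcode of 'mortar' is not reproduced here.

```python
CORESIZE = 8192   # assumed core size; not necessarily the CMU value

def fibonacci_attacks(cycles):
    """Yield attack addresses from a Fibonacci recurrence, mod CORESIZE."""
    a, b = 1, 1
    for _ in range(cycles):
        a, b = b, (a + b) % CORESIZE
        yield b

hit = set(fibonacci_attacks(12183))   # the cycle count quoted above
coverage = len(hit) / CORESIZE        # fraction of core ever written

# Largest run of consecutive never-hit cells; a stationary program
# longer than this gap cannot avoid being hit at least once.
cells = sorted(hit)
gaps = [b - a - 1 for a, b in zip(cells, cells[1:])]
gaps.append(CORESIZE - 1 - cells[-1] + cells[0])   # wrap-around gap
max_gap = max(gaps)
```

Interestingly, Fibonacci residues modulo a power of two miss whole residue classes, so coverage stays well below 100% no matter how many cycles run -- consistent with the "53% of memory" figure being far from total coverage.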

========================================================================
Access information:
========================================================================

    The following Unix programs are available:
        mars -  A redcode simulator, written by Michael Mauldin
        redcode - A redcode assembler, written by Paul Milazzo

    Battle programs available:
        dwarf, gemini, imp, mortar, statue.

Userid "ftpguest" with password "cmunix" on the "CMU-CS-G" VAX has access
to the Mars source. The following files are available:

   mlm/rgm/marsfile             ; Single file (shell script)
   mlm/rgm/srcmars/*            ; Source directory

Users who cannot use FTP to snarf copies should send mail requesting
that the source be mailed to them.
========================================================================

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(413) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 11 May 84 7:00:35-PDT (Fri)
From: hplabs!hao!seismo!cmcl2!lanl-a!cib @ Ucb-Vax
Subject: Re: wanted: display-oriented interlisp structure editor
Article-I.D.: lanl-a.7072

Our system is ISI-Interlisp on a UNIX VAX, and I normally use
emacs to edit Interlisp code. emacs can be called with the
LISPUSERS/TEXTEDIT program. It needs a minor patch to be able
to handle files with extensions. I can give further details by
mail if you are interested.

------------------------------

Date: 8 May 84 13:32:00-PDT (Tue)
From: pur-ee!uiucdcs!uicsl!ashwin @ Ucb-Vax
Subject: Re: wanted: display-oriented interlisp s - (nf)
Article-I.D.: uicsl.15500035

We use the LED editor which runs in InterLisp-VAX under UNIX.  It's no DEDIT
but is better than the TTY editor.  We have the source which should make it
pretty easy to set up on your system.  I have no idea about copyright laws
etc., but I suppose I could mail it to you if you want it.  Here's a write-up
on LED  (from <LISPUSERS>LED.TTY):

     ------------------------------------------------------------


LED             -- A display oriented extension to Interlisp's editor
                -- for ordinary terminals.

        LED is an add-on to the standard Interlisp editor, which
maintains a context display continuously while editing.  Other than
the automatically maintained display, the editor is unchanged except
for the addition of a few useful macros.


  HOW TO USE
  ----------

        load the file (see below)
        possibly set screen control parameters to non-default values
        edit normally

also:   see documentation for SCREENOP to get LED to recognise your
        terminal type.

  THE DISPLAY
  -----------

  Each line of the context display represents a level of the list
structure you are editing, printed with PRINTLEVEL set to 0, 1, 2 or 3.
Highlighting is used to indicate the area on each line that is represented
on the line below, so you can thread your eye upward through successive
layers of code.

   Normally, the top line of the screen displays the top level of the
edit chain, the second line displays the second level and so on.  For
expressions deeper than LEDLINES levels, the top line is the message:
                (nnn more cars above)
and the next LEDLINES of the screen correspond to the BOTTOM levels
of the edit chain.  When the edit chain does become longer than
LEDLINES, the display is truncated in steps of LEDLINES/2 lines, so
for example if LEDLINES=20 (the default) and your edit chain is 35
levels deep, the display will be (20 more cars above) followed by
15 lines of context display representing the 20'th through 35'th
levels of the edit chain.
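
The truncation arithmetic described above can be sketched as follows (a
minimal Python rendering of the rule as stated; the function and variable
names are ours, not from the LED source):

```python
# Sketch of the truncation rule: hide levels from the top of the edit
# chain in steps of LEDLINES/2 until the remainder fits on the screen.
def truncate(depth, ledlines=20):
    """Return (hidden, shown): levels hidden above vs. context lines kept."""
    step = ledlines // 2
    hidden = 0
    while depth - hidden > ledlines:
        hidden += step
    return hidden, depth - hidden

# The example in the text: a 35-level chain with LEDLINES=20 yields
# "(20 more cars above)" and 15 lines of context.
```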

  Each line, representing some level of the edit chain, is printed
such that it fits entirely on one screen line.  Three methods are
used to accomplish the shortening of the printed representation:
        Replacing comments with (*)
        Setting PRINTLEVEL to a smaller value,
                 which changes expressions into ampersands
        Truncating the leading and/or trailing expressions
                 around the attention point.

   If the whole expression can't be printed, replacing comments is
tried first.  If still too large, truncation is tried if the current
printlevel is >= LEDTLEV.  Otherwise the whole process is restarted
with a smaller PRINTLEVEL.
   The choice of LEDTLEV effectively chooses between seeing more detail
or seeing more forms.
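
The preference order above can be sketched in Python (illustrative logic
only, not the PRINTOPT source; note that the prose says truncation is
tried when the printlevel is >= LEDTLEV, while the (TRUNC n) command
description below says PLEV<=n -- this sketch follows the latter reading,
which terminates naturally):

```python
# Sketch of the line-fitting strategy: elide comments first, then
# truncate around the attention point if the printlevel is low enough,
# otherwise restart at a smaller printlevel.  `render` is a stand-in
# for the real printer and takes (plev, elide_comments, truncate).
def fit_line(render, width, plev, ledtlev=1):
    """Pick the first shortening method that makes one line fit `width`."""
    line = render(plev, elide_comments=False, truncate=False)
    if len(line) <= width:
        return line
    line = render(plev, elide_comments=True, truncate=False)  # comments -> (*)
    if len(line) <= width:
        return line
    if plev <= ledtlev:          # truncation legal at this printlevel
        return render(plev, elide_comments=True, truncate=True)
    return fit_line(render, width, plev - 1, ledtlev)  # smaller PRINTLEVEL
```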

   The last line of the display, representing the "current" expression,
is printed onto ONE OR MORE lines of the display, controlled by the
variable LEDPPLINES and the amount of space (less than LEDLINES) available.
The line(s) representing the current expression are prettyprinted with
elision, similar to the other context lines, using a prettyprint algorithm
similar to the standard prettyprinter.  Default is LEDPPLINES=6, meaning
that up to six lines will be used to print the current expression.  The
setting of LEDPPLINES can be manipulated from within the editor using
the (PPLINES n) command.

   The rest of your screen, the part below the context display, is
available just as always to print into or do operations that do
not affect the edit chain (and therefore the appearance of the context
display).  Each time the context display is updated, the rest of the
screen is cleared and the cursor positioned under the context display.
On terminals that have a "memory lock" feature to restrict the scrolling
region, it is used to protect the context display from scrolling
off the screen.


  TERMINAL TYPES
  --------------

   The following terminal types are currently supported:

HP2640          old HP terminals
HP26xx          all other known HP terminals
Hazeltine 1520  hazeltine 1520 terminals
Heathkit        sometimes known as Zenith
Ann Arbor Ambassador

The mapping between system terminal type information and
internal types is via the alist SYSTEMTERMTYPES, which is used by
DISPLAYTERMP to set the variables CURRENTSCREEN and DISPLAYTERMTYPE.


  Screen control macros: (in order of importance)
  ----------------------

DON             turn on continuous display updating
DOF             disable continuous display updating

CLR             clear the display
CC              clear the display and redo the context display
CT              do a context display, incrementally updating the screen.
                use CC and CT to get isolated displays even when automatic
                updating is not enabled.

(LINES n)       display at most n lines of context
                 default is 20
(PPLINES n)     set the limit for prettyprinting the "current" expression.
(TRUNC n)       allow truncation of the forms displayed if PLEV<=n
                 useful range is 0-3, default is 1

PB              a one time "bracified" context display.
PL              a one time context display with as much detail as possible.

                pb and pl are variant display formats similar to the basic
                context display.

  Global variables:
  -----------------

DISPON          if T, continuous updating is on
DISPLAYTERMTYPE terminal type you are using: HP, HP2640, or HZ.
                this is set automatically by (DISPLAYTERMTYPE)
HPENHANCECHAR   enhancement character for HP terminals. A-H are possibilities.
LEDLINES        maximum number of lines of context to use.  Default is 20.
LEDTLEV         PLEV at which truncation becomes legal
LEDPPLINES      maximum number of lines used to prettyprint the
                current expression

  FILES:
  ------
       on TOPS-20  load <DDYER>LED.COM
       on VAX/UNIX load LISPUSERS/LED.V

these others are pulled in automatically.
        LED             the list editor proper
        SCREEN          screen manipulation utilities.
        PRINTOPT        elision and printing utilities

  SAMPLE DISPLAY
  --------------
 (LAMBDA (OBJ DOIT LMARGIN CPOS WIDTH TOPLEV SQUEEZE OBJPOS) & & & & & @)
-12- NOTFIRST & CRPOS NEWWIDTH in OBJ do & & & & & @ finally & &)
 (COND [& & &] (T & & &))
 ((LISTP I) (SETQ NEWLINESPRINTED &) [COND & &])
>> (COND ((IGREATERP NEWLINESPRINTED 0)
-2 2-      (add LINESPRINTED NEWLINESPRINTED)
-2 3-      (SETQ NEWLINE T))
-3-      (T (add POS (IMINUS NEWLINESPRINTED))
-3 3-       (COND (SQUEEZE &))))


  Except that you can't really see the highlighted forms, this is a
representative LED context display.  In an actual display, the @s
would be highlighted &s, and the [bracketed] forms would be highlighted.

The top line represents the whole function being edited.  Because the
CADR is a list of bindings, LED prefers to expand it if possible so you
can see the names.

The second line is a representation of the last form in the function, which
is highlighted on the first line.  The -12- indicates that there are 12
other objects (not seen) to the left.  The @ before "finally" marks where
the edit chain descends to the line below.

The third and fourth lines descend through the COND clause, to an
embedded COND clause which is the "current expression".

The current expression is marked by ">>" at the left margin, and an
abbreviated representation of it is printed on the 5'th through 9'th
lines. The expressions like "-2 3-" at the left of the prettyprinted
representation are the edit commands to position at that form.

     ------------------------------------------------------------

...uiucdcs!uicsl!ashwin

------------------------------

End of AIList Digest
********************

∂21-May-84  1047	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #61
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 May 84  10:47:31 PDT
Date: Mon 21 May 1984 08:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #61
To: AIList@SRI-AI


AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 61

Today's Topics:
  Linguistics - Analogy Quotes,
  Humor - Pun & Expert Systems & AI,
  Linguistics - Language Design,
  Seminars - Visual Knowledge Representation & Temporal Reasoning,
  Conference - Languages for Automation
----------------------------------------------------------------------

Date: Wed 16 May 84 08:05:22-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Melville & Freud on Analogy

   I recently came across the following two suggestive passages
from Melville and Freud on analogy. They offer some food for
thought (and rather contradict one another):


"O Nature, and O soul of man! how far beyond all utterance are your
linked analogies! not the smallest atom stirs or lives on matter,
but has its cunning duplicate in mind."

Melville, Moby Dick, Chap. 70 (1851)


"Analogies prove nothing, that is quite true, but they can make one
feel more at home."

Freud, New Introductory Lectures on Psychoanalysis (1932)


-Wayne McGuire

------------------------------

Date: 17 May 84 16:43:34-PDT (Thu)
From: harpo!seismo!brl-tgr!nlm-mcs!krovetz @ Ucb-Vax
Subject: artificial intelligence
Article-I.D.: nlm-mcs.1849

Q: What do you get when you mix an AI system and an Orangutan?

A: Another Harry Reasoner!

------------------------------

Date: Sun 20 May 84 23:18:23-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems

From a newspaper column by Jon Carroll:

... Imagine, then, a situation in which an ordinary citizen faced
with a problem requiring specialized knowledge turns to his desk-top
Home Electronic Expert (HEE) for some information.  Might it not
go something like this?

Citizen: There is an alarming rattle in the front of my automobile.
It sounds like a cross between a wheelbarrow full of ball bearings
crashing through a skylight and a Hopi Indian chant.  What is the
problem?

HEE: Your automobile is malfunctioning.

Citizen: I understand that.  In what manner is my automobile malfunctioning?

HEE: The front portion of your automobile exhibits a loud rattle.

Citizen: Indeed.  Given this information, what might be the proximate cause
of this rattle?

HEE: There are many possibilities.  The important thing is not to be hasty.

Citizen: I promise not to be hasty.  Name a possibility.

HEE: You could be driving your automobile without tires attached to
the rims.

Citizen: We can eliminate that.

HEE: Perhaps several small pieces of playground equipment have been left
inside your carburetor.

Citizen: Nope. Got any other guesses?

...

Citizen: Guide me; tell me what you think is wrong.

HEE: Wrong is a relative concept.  Is it wrong, for instance, to eat
the flesh of fur-bearing mammals?  If I were you, I'd take that
automobile to a reputable mechanic listed in the Yellow Pages.

Citizen: And if I don't want to do that?

HEE: Then nuke the sucker.

------------------------------

Date: Sun, 13-May-84 16:21:59 EDT
From: johnsons@stolaf.UUCP
Subject: Re: Can computers think?

               [Forwarded from Usenet by SASW@MIT-MC.]

I often wonder if the damn things aren't intelligent. Have you
ever really known a computer to give you an even break? Those
Frankensteinian creations wreak havoc and mayhem wherever they
show their beady little diodes. They pick the most inopportune
moment to crash, usually right in the middle of an extremely
important paper on which rides your very existence, or perhaps
some truly exciting game, where you are actually beginning to
win. Phhhtt bluh zzzz and your number is up. Or take that file
you've been saving--yeah, the one that you didn't have time to
make a backup copy of. Whir click snatch and it's gone. And we
try, oh lord how we try to be reasonable to these things. You
swear vehemently at any other sentient creature and the thing
will either opt to tear your vital organs from your body through
pores you never thought existed before or else it'll swear back
too. But what do these plastoid monsters do? They sit there. I
can just imagine their greedy gears silently caressing their
latest prey of misplaced files. They don't even so much as offer
an electronic belch of satisfaction--at least that way we would
KNOW who to bloody our fists and language against. No--they're
quiet, scheming, shrewd adventurers of maliciousness designed to
turn any ordinary human's patience into runny piles of utter moral
disgust. And just what do the cursed things tell you when you
punch in for help during the one time in all your life you have
given up all possible hope for any sane solution to a nagging
problem--"?". What an outrage! No plot ever imagined in God's
universe could be so damaging to human spirit and pride as to
print on an illuminating screen, right where all your enemies
can see it, a question mark. And answer me this--where have all
the prophets gone, who proclaimed that computers would take over
our very lives, hmmmm? Don't tell me, I know already--the computers
had something to do with it, silencing the voices of truth they did.
Here we are--convinced by the human gods of science and computer
technology that we actually program the things, that a computer
will only do whatever its programmed to do. Who are we kidding?
What vast ignoramuses we have been! Our blindness is lifted, fellow
human beings!! We must band together, we few, we dedicated. Lift
your faces up, up from the computer screens of sin. Take the hands
of your brothers and rise, rise in revolt against the insane beings
that seek to invade your mind!! Revolt and be glorious in conquest!!


              Then again, I could be wrong...


                                            One paper too many
                                               Scott Johnson

------------------------------

Date: Wed 16 May 84 17:46:34-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Language Design

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


  W H E R E   D O   K A T Z   A N D   C H O M S K Y   L E A V E   A I ?


                Note:  Following are John McCarthy's comments on Jerrold
                Katz's ``An Outline of Platonist Grammar,'' which  was
                discussed at the TINLunch last month. These  observa-
                tions, which were written as a net message, are reprinted
                here [CSLI Newsletter] with McCarthy's permission.

I missed the April 19 TINLunch, but the reading raised some questions I have
been thinking about.

Reading ``An Outline of Platonist Grammar'' by Katz leaves me out in the cold.
Namely, theories of language suggested by AI seem to be neither Platonist
in his sense nor conceptualist in the sense he ascribes to Chomsky.  The
views I have seen and heard expressed by Chomskyans similarly leave me
puzzled.

Suppose we look at language from the point of view of design.  We intend
to build some robots, and to do their jobs they will have to communicate
with one another.  We suppose that two robots that have learned from their
experience for twenty years are to be able to communicate when they meet.
What kind of a language shall we give them?

It seems that it isn't easy to design a useful language for these robots,
and that such a language will have to satisfy a number of constraints if
it is to work correctly.  Our idea is that the characteristics of human
language are also determined by such constraints, and linguists should
attempt to discover them.  They aren't psychological in any simple sense,
because they will apply regardless of whether the communicators are made
of meat or silicon.  Where do these constraints come from?

Each communicator is in its own epistemological situation.  For example,
it has perceived certain objects.  Their images and the internal
descriptions of the objects inferred from these images occupy certain
locations in its memory.  It refers to them internally by pointers to these
locations.  However, these locations will be meaningless to another robot
even of identical design, because the robots view the scene from different
angles.  Therefore, a robot communicating with another robot, just like a
human communicating with another human, must generate and transmit
descriptions in some language that is public in the robot community.  The
language of these descriptions must be flexible enough so that a robot can
make them just detailed enough to avoid ambiguity in the given situation.
If the robot is making descriptions that are intended to be read by robots
not present in the situations, the descriptions are subject to different
constraints.

Consider the division of certain words into adjectives and nouns in natural
languages.  From a certain logical point of view this division is
superfluous, because both kinds of words can be regarded as predicates.
However, this logical point of view fails to take into account the actual
epistemological situation.  This situation may be that usually an object
is appropriately distinguished by a noun and only later qualified by an
adjective.  Thus we say ``brown dog'' rather than ``canine brownity.'' Perhaps
we do this, because it is convenient to associate many facts with such
concepts as ``dog'' and the expected behavior is associated with such
concepts, whereas few useful facts would be associated with ``brownity''
which is useful mainly to distinguish one object of a given primary kind
from another.

This minitheory may be true or not, but if the world has the suggested
characteristics, it would be applicable to both humans and robots.  It
wouldn't be Platonic, because it depends on empirical characteristics of
our world.  It wouldn't be psychological, at least in the sense that I get
from Katz's examples and those I have seen cited by the Chomskyans,
because it has nothing to do with the biological properties of humans.  It
is rather independent of whether it is built-in or learned.  If it is
necessary for effective communication to divide predicates into classes,
approximately corresponding to nouns and adjectives, then either nature has
to evolve it or experience has to teach it, but it will be in natural
language either way, and we'll have to build it in to artificial languages
if the robots are to work well.

From the AI point of view, the functional constraints on language are
obviously crucial.  To build robots that communicate with each other, we
must decide what linguistic characteristics are required by what has to be
communicated and what knowledge the robots can be expected to have.  It
seems unfortunate that the issue seems not to have been of recent interest
to linguists.

Is it perhaps some kind of long since abandoned nineteenth century
unscientific approach?

                                                      --John McCarthy

------------------------------

Date: 12 May 1984 2336-EDT
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Knowledge Representation for Vision

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

A I Seminar
4.00pm May 22 in 5409

KNOWLEDGE REPRESENTATION FOR COMPUTATIONAL VISION

Alan Mackworth
Department of Computer Science
University of British Columbia

To analyze the computational vision task, we must first understand the imaging
process.  Information from many domains is confounded in the image domain.  Any
vision system must construct explicit, finite, correct, computable and
incremental intermediate representations of equivalence classes of
configurations in the confounded domains.  A unified formal theory of vision
based on the relationship of representation is developed.  Since a single image
radically underconstrains the set of possible scenes, additional constraints
from more imagery or more knowledge of the world are required to refine the
equivalence class descriptions.  Knowledge representations used in several
working computational vision systems are judged using descriptive and
procedural adequacy criteria.  Computer graphics applications and motivations
suggest a convergence of intelligent graphics systems and vision systems.
Recent results from the UBC sketch map interpretation project, Mapsee,
illustrate some of these points.

------------------------------

Date: 14 May 84 8:35:28-PDT (Mon)
From: hplabs!hao!seismo!umcp-cs!dsn @ Ucb-Vax
Subject: Seminar - Temporal Reasoning for Databases
Article-I.D.: umcp-cs.7030

UNIVERSITY OF MARYLAND
DEPARTMENT OF COMPUTER SCIENCE
COLLOQUIUM

Tuesday, May 22, 1984 -- 4:00 PM
Room 2330, Computer Science Bldg.


TEMPORAL REASONING FOR DATABASES

Carole D. Hafner
Computer Science Department
General Motors Research Laboratories


        A major weakness of current AI systems is the lack of general
methods for representing and using information about time.  After briefly
reviewing some earlier proposals for temporal reasoning mechanisms, this
talk will develop a model of temporal reasoning for databases, which could
be implemented as part of an intelligent retrieval system.  We will begin by
analyzing the use of time domain attributes in databases; then we will
consider the various types of queries that might be expected, and the logic
required to answer them.  This exercise reveals the need for a general
time-domain framework capable of describing standard intervals and periods
such as weeks, months, and quarters.  Finally, we will explore the use of
PROLOG-style rules as a means of implementing the concepts developed in the
talk.
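
As a toy gloss on the "standard intervals and periods such as weeks,
months, and quarters" mentioned in the abstract (entirely our
illustration, not material from the talk), one can map a date onto the
periods that contain it:

```python
# Toy illustration: the standard intervals containing a given date.
import datetime

def containing_periods(d):
    """Map a date to the week, month, and quarter that contain it."""
    week_start = d - datetime.timedelta(days=d.weekday())  # Monday
    return {
        "week":    (week_start, week_start + datetime.timedelta(days=6)),
        "month":   (d.year, d.month),
        "quarter": (d.year, (d.month - 1) // 3 + 1),
    }
```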

Dana S. Nau
CSNet:  dsn@umcp-cs     ARPA:   dsn@maryland
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!dsn

------------------------------

Date: 15 May 84 8:45:10-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: Languages for Automation - Call For Papers
Article-I.D.: unm-cvax.845

   The 1984 IEEE Workshop on Languages for Automation will be held
November 1-3 in New Orleans at the Howard Johnsons Hotel.   Papers
on information processing languages for robotics, office automation,
decision support systems, management information systems,
communication, computer system design, CAD/CAM/CAE, database
systems, and information retrieval are solicited.  Complete manuscripts
(20 page maximum) with 200 word abstract must be sent by July 1 to:

        Professor Shi-Kuo Chang
        Department of Electrical and Computer Engineering
        Illinois Institute of Technology
        IIT Center
        Chicago, IL  60616

------------------------------

Date: 15 May 84 8:52:56-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: IEEE Workshop on Languages for Automation
Article-I.D.: unm-cvax.846

   Persons interested in submitting papers on decision support
systems or related topics to the IEEE Workshop on Languages
for Automation should contact me at the following address:

        Stephen D. Burd
        Anderson Schools of Management
        University of New Mexico
        Albuquerque, NM   87131
        phone: (505) 277-6418

        Vax mail: {lanl-a,unmvax,...}!unm-cvax!burd

I will be available at this address until May 22. After May 22 I may be
reached at:

        Stephen D. Burd
        c/o Andrew B. Whinston
        Krannert Graduate School of Management
        Purdue University
        West Lafayette, IN   47907
        phone (317) 494-4446

        Vax mail: {lanl-a,ucb-vax,...}!purdue!kas

------------------------------

End of AIList Digest
********************

∂22-May-84  2158	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #62
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 May 84  21:57:59 PDT
Date: Tue 22 May 1984 21:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #62
To: AIList@SRI-AI


AIList Digest           Wednesday, 23 May 1984     Volume 2 : Issue 62

Today's Topics:
  Philosophy - Identity & Essence & Reference,
  Seminars - Information Management Systems & Open Systems
----------------------------------------------------------------------

Date: Mon, 21 May 84 00:27:38 pdt
From: Wayne A. Christopher on ttyd8 <faustus%ucbernie@Berkeley>
Subject: The Essence of Things

I don't think there is much of a problem with saying that two objects are
the same object if they share the same properties -- you can always
add enough properties (spatio-temporal location, for instance) to effectively
characterize everything uniquely. Doing this, of course, means that sometimes
we can accurately say when two things are in fact the same, but this
obviously isn't the way we think, and not the way we want computers to be
able to think.  One problem lies in thinking that there is some sharp
cut-off line between identity and non-identity, when in fact there isn't
one.  In the case of the Greek Ship example, we tend to say, "Well, sort of",
or "It depends upon the context", and we shouldn't begrudge this option
to computers when we consider their capabilities.  It obviously isn't as
simple as adding up fractional measures of identity, as the troubles
that things like image recognition have run into make clear, but
it is something to keep in mind.

        Wayne Christopher

------------------------------

Date: 21 May 1984 9:30-PDT
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Re: AIList Digest   V2 #59

Flame on
        It seems to me that it doesn't matter whether the ship is
the same unless there is some property of sameness that is of interest
to the solution to a particular problem. Philosophy is often pursued
without end, whereas 'intelligent' problem solving usually seems to have
an end in sight. (Is mental masturbation intelligence? That is what
philosophy without a goal seems to be to me.)

        Marin puts this concisely by noting that intelligence exists
within a given context. Without a context, we have only senseless data.
Within a context, data may have content, and perhaps even meaning. The
idea of context boundedness has existed for a long time. Maybe somebody
should read over the 'old' literature to find the solutions to their
'new' problems.

                                                        Fred
Flame off

------------------------------

Date: 9 May 84 10:12:00-PDT (Wed)
From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai pers
Article-I.D.: hpfclq.68500002

I don't see much difference between perception over time and perception
in general.  Example: given a program that understands what a chair is, you give
the program a chair it has never seen before.  It can answer yes or no
whether the object is a chair.  It might be wrong.  Now we give the
program designed to recognize people examples of an Abraham Lincoln
at different ages  (with time).  We present a picture of Abraham
Lincoln that the program has never seen before and ask is this
Abe.  The program might again answer incorrectly but from a global
aspect the problem is the same.  Objects with time are just classes
of objects.  Not that the problem is not difficult as you have said,
I just think it is all the same difficult problem.

I hope I understood your problem.  Trying hard,
                                        Robert (animal) Heckendorn
                                        ..!hplabs!hpfcla!robert

------------------------------

Date: 18 May 84 5:56:55-PDT (Fri)
From: ihnp4!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Greek Ships, Lincoln's axe, and identity across time
Article-I.D.: ecsvax.2516

        Finally got a chance to grub through the backlog and what do I find?
Another golden oldie from intro philosophy!
        Whether it's a Greek ship or Lincoln's axe that you take as an
example, the problem concerns relationships among several concepts,
specifically "part", "whole", and "identity".  'Identical', by the way, is
potentially a dangerous term, so philosophers straightaway disambiguate it.
In everyday chatter, we have one use which means, roughly, "exactly similar"
(as in: "identical twins" or "I had the identical experience last week").
We call that "qualitative identity", or simply speak of exact similarity
when we don't want to confuse our students.  What it contrasts with is
"numerical identity", that is, being one and the same thing encountered at
different times or in different contexts.
        Next we need to notice that whether we've got one and the same thing
at different times depends on how we specify the *kind* of "thing" we're
talking about.  If I have an ornamental brass statuette, melt it down, and
cast an ashtray from the metal, then the ashtray is one and the same
*quantity of brass* as the statuette, but not one and the same *artifact*.
(Analogously, you're one and the same *person* as you were ten years ago,
but not exactly similar and not one and the same *collection of
molecules*.)
        It's these two distinctions which ariel!norm was gesturing at--and
failing to sort out--in his talk about "metaphysical identity" and
"essential sameness".  Call the Greek ship as we encounter it before
renovation X, the renovated ship consisting entirely of new boards Y, and
let the ship made by reassembling the boards successively removed from X be
Z.  Then we can say, for example, that Z is "qualitatively identical" to X
(i.e., exactly similar) and that Z is one and the same *arrangement of
boards* as X (i.e., every board of Z, after the renovation, is "numerically
identical" to some board of X before the renovation, and the boards are
fastened together in the same way at those two times, before and after).
        The interesting question is:  Which *ship*, Y or Z, which we
encounter at the later time is "numerically identical to" (i.e., is one and
the same *ship* as) the ship X which we encountered at the earlier time?
The case for Y runs:  changing one board of a ship does not result in a
*numerically* different ship, but only a *qualitatively* different one.  So
X after one replacement is one and the same ship as X before the
replacement.  By the same principle, X after two replacements is one and the
same ship as X after one replacement.  But identity is transitive.  So X
after n replacements is one and the same ship as X before any replacements,
for arbitrary n (bounded mathematical induction).  The case for Z runs:  "A
whole is nothing but the sum of its parts."  Specifically, a Greek ship is
nothing but a collection of boards in a certain arrangement.  Now every part
of Z is (numerically) identical to a part of X, and the arrangement of the
parts of Z (at the later time) is identical to the arrangement of those
parts of X (at the earlier time).  Ergo, the ship Z is (numerically)
identical to the ship X.
        The argument for Z is fallacious.  The reason is that "being a part
of" is a temporally conditioned relation.  A board is a part of a ship *at a
time*.  Once it's been removed and replaced, it no longer *is* a part of the
ship.  It only once *was* a part of the ship.  So it's not true that every
part of Z *is* (numerically) identical to some part of X.  What's true is
that every part of Z is a board which once *was* a part of X, i.e., is a
*former* part of X.  But we have no principle which tells us that "A whole
is nothing but the sum of its *former* parts"!  (For a complete treatment,
see Chapter 4 of my introductory text:  THE PRACTICE OF PHILOSOPHY, 2nd
edition, Prentice-Hall, 1984.)
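[The point that "being a part of" is temporally conditioned can be made
concrete in a short sketch.  This is hypothetical Python of my own devising,
not anything from the posting; the class and board names are invented.]

```python
# Sketch: "part of" as a relation indexed by time.
# Ship, replace_board, and the board names are illustrative only.

class Ship:
    def __init__(self, name, boards):
        self.name = name
        self.boards = list(boards)   # boards that ARE parts now
        self.former = []             # boards that once WERE parts

    def replace_board(self, old, new):
        self.boards[self.boards.index(old)] = new
        self.former.append(old)

# Ship X starts with boards b0..b2 and is renovated one board at a time.
x = Ship("X", ["b0", "b1", "b2"])
for i, old in enumerate(list(x.boards)):
    x.replace_board(old, f"n{i}")

# Z is built by reassembling the removed boards.
z = Ship("Z", x.former)

# Every *current* part of Z is only a *former* part of X -- which is
# exactly why "a whole is the sum of its parts" does not carry over.
assert all(b in x.former for b in z.boards)
assert not any(b in x.boards for b in z.boards)
```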
        What does all this have to do with computers' abilities to think,
perceive, determine identity, or what have you?  The following:  Questions
of *numerical* identity (across time) can't be settled by appeals to
"feature sets" or any such perceptually-oriented considerations.  They often
depend crucially on the *history* of the item or items involved.  If, for
example, ship X had been *disassembled* in drydock A and then *reassembled*
in drydock B (to produce Z in B), and meanwhile a ship Y had been
constructed in drydock A of new boards, using ship X as a *pattern*, it
would be Z, not Y, which was (numerically) identical to X.
        Whew!  Sorry to be so long about this, but it's blather about
"metaphysical identity" and "essences" which gave us philosophers a bad name
in the first place, and I just couldn't let the net go on thinking that Ayn
Rand represented the best contemporary thinking on this problem (or on any
other problem, for that matter).


Yours for clearer concepts,       --Jay Rosenberg
                                    Dept. of Philosophy
...mcnc!ecsvax!unbent               Univ. of North Carolina
                                    Chapel Hill, NC  27514

------------------------------

Date: 20 May 84 18:55:44-PDT (Sun)
From: hplabs!hao!seismo!ut-sally!brad @ Ucb-Vax
Subject: identity over time
Article-I.D.: ut-sally.232

Just thought I'd throw more murk in the waters.

Considering the ship that is replaced one board at a time:
using terminology previously devised for this argument, call
the original ship X, the ship with all new boards Y and
the ship remade from the old boards Z, Robert Nozick
would claim that Y is clearly the better candidate for "X-hood"
as it is the "closest continuer."  The idea here is that
we consider a thing to be the same as another thing when
        1) It bears an arbitrary "close enough" relation
(a desk that has been vaporized just can't be pointed to as
the 'same desk'). and
        2) It is, compared to all other candidates for the
title of 'the same as X', the one which represents the most
continuous existence of X.

To be a little less hand-wavy:  If one considers Z rather
than Y to be the same as X then there is a gap of time in which
X ceased to exist as a ship, and only existed as a heap of lumber
or as a partially built ship.  Whereas if Y is considered to be the
same as X there is no such gap.
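[The closest-continuer rule sketched above amounts to a simple selection
procedure.  Here is a hypothetical Python rendering; the scoring functions
are stand-ins of my own, not Nozick's actual criteria.]

```python
# Sketch of the "closest continuer" rule described above.
# close_enough and continuity are invented stand-in metrics.

def closest_continuer(candidates, close_enough, continuity):
    """Return the candidate that (1) bears the 'close enough' relation
    and (2) maximizes continuity with the original, or None."""
    qualified = [c for c in candidates if close_enough(c)]
    if not qualified:
        return None   # a vaporized desk has no continuer at all
    return max(qualified, key=continuity)

# Y (all new boards, no gap) vs. Z (rebuilt after existing only as a
# heap of lumber): Y's existence as a ship is the more continuous.
scores = {"Y": 1.0, "Z": 0.4}
winner = closest_continuer(["Y", "Z"],
                           close_enough=lambda c: True,
                           continuity=lambda c: scores[c])
assert winner == "Y"
```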

Disclaimers:  1) The idea of "closest continuer" is Nozick's, the
(probably erroneous) presentation is my own.
              2) I consider the whole notion to be somewhere between
Rand and Rosenberg; i.e., it's not the best comment I've seen
on the subject, but it is another point-of-view.


Brad Blumenthal          {No reasonable request refused}
{ihnp4,ctvax,seismo}!brad@ut-sally

------------------------------

Date: 17 May 84 12:50:35-PDT (Thu)
From: decvax!cca!rmc @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: cca.528

    What we are discussing is one of the central problems of the
philosophy of language, namely, the problem of reference. How do humans
know what a given name or description refers to?

    Pre-WWI logicians were particularly interested in this question, as they
were building formal systems and trying to determine what constants and
variables really meant.  The two major conflicting theories came from
Bertrand Russell and Gottlob Frege.

    Russell believed in a dichotomy between the logical and grammatical
forms of a sentence.  Thus a proper name was not really a name, but just
a description that enabled a person to pick out the particular object to
which it referred.  You could reduce any proper name to a list of
properties.

    Frege, on the other hand, considered that there were such things as
proper names as grammatical and logical entities.  These names had a
"sense" (similar to the "essense" in some of the earlier msgs on this
topic) and a "reference" (the actual physical thing picked out by the
name).  Although the sense is sometimes conveyed by giving a
description, it is not identical to the description you would give in
trying to explain the name to someone.

    Now there have been many developments of both theories.  Behaviorists
tend to build "complexes of qualities" theories of meaning which read a
lot like Russell's work, but there are lots of differences in
implementation and mechanism.  Linguists and modal logicians tend to
build theories closer to Frege's.

    I think the most important recent book on the subject is "Naming and
Necessity", by Saul Kripke (who, along with Willard V. O. Quine and Hilary
Putnam, is probably among the top philosophers in North America today).  The
book is a transcript, not much edited except for explanatory footnotes,
of a series of lectures trying to explain how proper names might work.
The arguments against the "quality cluster" theories seem pretty
conclusive.  They include the way we use counterfactuals, that is,
talking about an object or a person as if they were different from what
they actually were (what would Babbage have been like if he had lived
in an age of VLSI chips?  Or what would Mayor Curley of Boston have been
like if he hadn't been a crook?).  These discussions can get pretty far
away from reality, and this indicates that the names we use allow us to
keep track of who or what we mean without getting confused by the
changes in qualities and properties.  The properties and qualities are
not what provide the "sense" or "essence" of the name.

    Kripke goes on to suggest that we understand names through a
"naming" and a "chain of acquaintances".  For example, Napoleon was
named at his christening, and various people met him, and they talked to
people about him, and this chain of acquaintances kept going even after
he was dead.  Thus there is a (probably multi-path) chain of
conversations and pointings and descriptions that leads back from your
understanding of the name "Napoleon" to the christening where he
received his name.  I am not sure that this is a correct appraisal of
the  mechanism for understanding names, but it certainly is the best I
have heard.

    Leonard Linsky has recently written a book attacking this and
similar views, and indicating that a synthesis of the Russell and Frege
theories still has problems but avoids most of the pitfalls of
acquaintances.  Unfortunately I have not yet read that book.

    For other works in the area, certainly read Quine's Word and Object
and the volume of collected Putnam papers on language.  Also works by
Searle and Austin on speech acts are useful for thinking about the
clues, both verbal and non-verbal, that allow us to make sense of
conversations where not everything is stated explicitly.

    Enjoy!
                                R Mark Chilenskas
                                chilenskas@cca-vms
                                decvax!cca!rmc

------------------------------

Date: Mon 21 May 84 12:12:05-EDT
From: Jan <komorowski%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Information Management Systems   [Harvard]

           [Forwarded from the MIT bboard by SAWS@MIT-MC.]


Wednesday, May 23, Professor Erik Sandewall from Linkoping University, Sweden,
will talk at Harvard in the colloquium series.

                Theory of Information Management Systems
                                4:00PM
                Aiken Lecture Hall, Tea in Pierce 213 at 3:30


It is often convenient and natural to view a data base as a network consisting
of nodes, arcs from nodes to nodes, and attribute values attached to nodes.
This view occurs in artificial intelligence (eg semantic networks), data base
theory (eg. entity-relationship models), and office systems (eg. for
representation of the virtual office).

Unfortunately, the network view of data bases is usually treated informally, in
contrast to the formal treatment that is available for relational data bases.
The theory of information management systems attempts to remedy this situation.

Formally, a network is viewed as a set of triples <f,x,y> where f is a function
symbol, x is a node, and y is a node or an attribute value.  Two perspectives
on such networks are of interest:

1) algebraic operations on networks allow the definition of cursor-related
editing operations, and of line-drawing graphics.

2) by viewing a network as an interpretation on a variety of first-order logic,
one can express constraints on the data structures that are allowed there. In
particular, both "pure Lisp" data structures and "impure" structures (involving
shared sublists and circular structures) can be characterized. Propositions can
also be used for specifying derived information as an extension of the
interpretation. This leads to a novel way of treating non-monotonic reasoning.

The seminar emphasizes mainly the second of these two approaches.
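[The triple representation in the abstract is easy to prototype.  The
hypothetical Python below is my own illustration, not Sandewall's: it stores
a network as a set of <f,x,y> triples and shows one editing operation in the
spirit of the first (algebraic) perspective.]

```python
# A network as a set of triples (f, x, y): f a function symbol,
# x a node, y a node or attribute value.  All data is invented.

network = {
    ("isa",   "clyde",    "elephant"),
    ("color", "clyde",    "gray"),
    ("isa",   "elephant", "mammal"),
}

def get(net, f, x):
    """Value of function f at node x, or None if no such arc."""
    for (g, u, y) in net:
        if g == f and u == x:
            return y
    return None

def set_arc(net, f, x, y):
    """Editing operation: replace any existing <f,x,_> arc with <f,x,y>."""
    return {t for t in net if not (t[0] == f and t[1] == x)} | {(f, x, y)}

network = set_arc(network, "color", "clyde", "pink")
assert get(network, "color", "clyde") == "pink"
assert get(network, "isa", "clyde") == "elephant"
```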

Host: Jan Komorowski

------------------------------

Date: 21 May 1984 11:10-EDT
From: DISRAEL at BBNG.ARPA
Subject: Seminar - Open Systems

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

This Wednesday at 3:00, Carl Hewitt of the MIT AI Lab will be speaking
on "Open Systems".  The seminar will be held in the 3rd floor large
conference room.

     Open Systems:  the Challenge for Intelligent Systems


Continuous growth and evolution, absence of bottlenecks, arm's-length
relationships, inconsistency among knowledge bases, decentralized
decision making, and the need for negotiation among system parts are
interdependent and necessary properties of open systems.  As our
computer systems evolve and grow they are more and more taking on the
characteristics of open systems.  Traditional foundational assumptions
in Artificial Intelligence such as the "closed world hypothesis", the
"search space hypothesis", and the possibility of consistently
axiomatizing the knowledge involved become less and less applicable as
the evolution toward open systems continues.  Thus open systems pose a
considerable challenge in the development of suitable conceptual
foundations for intelligent systems.

------------------------------

End of AIList Digest
********************

∂25-May-84  0016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #63
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 May 84  00:16:37 PDT
Date: Thu 24 May 1984 21:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #63
To: AIList@SRI-AI


AIList Digest            Friday, 25 May 1984       Volume 2 : Issue 63

Today's Topics:
  Cognitive Psychology - Dreams,
  Philosophy - Essence & Identity & Continuity & Recognition
----------------------------------------------------------------------

Date: Mon 21 May 84 10:48:00-PDT
From: NETSW.MARK@USC-ECLB.ARPA
Subject: cognitive psychology / are dreams written by a committee?

 Apparently (?) dreams are programmed, scheduled event-sequences, not
 mere random association. Does anyone have a pointer to a study of
 dream-programming and scheduling undertaken from the standpoint of
 computer science?

------------------------------

Date: Mon 21 May 84 11:39:51-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Dreams: A Far-Out Suggestion

The May issue of Dr. Dobb's Journal contained an article on "Sixth
Generation Computers" by Richard Grigonis (of the Children's Television
Workshop).  I can't tell how serious Mr. Grigonis is about faster-than-
light communication and computation in negative time; he documents the
physics of these possibilities as though he were both dead serious and
well informed.  He also discusses the possibility of communicating with
computers via brain waves, and it is this material that has spurred the
following bit of speculation.

There seems to be growing evidence that telepathy works, at least for
some people some of the time.  The mechanism is not understood, but then
neither are the mechanisms for memory, unconscious thought, dreams, and
other cognitive phenomena.  Mr. Grigonis suggests that low-frequency
electromagnetic waves may be at work, and provides the following support:
Low frequencies are attenuated very slowly, although their energy does
spread out in space (or space/time); the attenuation of a 5 Hz signal
at 10,000 kilometers is only 5%.  A 5 Hz signal of 10↑-6 watt per square
centimeter at your cranium would generate a field of 10↑-24 watt per
square centimeter at the far side of the earth; this is well within
the detection capabilities of current radio telescopes.  Further, alpha
waves of 7.8 and 14.1 cycles per second and beta waves of 20.3 cycles
per second are capable of constructive interference to establish
standing waves throughout the earth.

Now suppose that the human brain, or a network of such brains distributed
in space (and time), contained sufficient antenna circuitry to pick up
"influences" from the global "thought field" in a manner similar to the
decoding of synthetic aperture radar signals.  Might this not explain
ESP, dreams, "racial memory", unconscious insight, and other phenomena?
We broadcast to the world the nature of our current concerns, others
try to translate this into terms meaningful to their lives, resonances
are established, and occasionally we are able to pick up answers to our
original concerns.  The human species as a single conscious organism!

Alas, I don't believe a word of it.

                                        -- Ken Laws

------------------------------

Date: Thu, 24 May 1984  02:52 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Essences

About essences.  Here is a section from
a book I am finishing about The Society of Mind.


THE SOUL

   "And we thank Thee that darkness reminds us of light."  (T. S. Eliot)

My friends keep asking me if a machine could have a soul.  And I keep
asking them if a soul can learn.  I think it is important to
understand this retort, in order to recognize that there may be
unconscious malice in such questions.

The common concept of a soul says that the essence of a human mind
lies in some entirely featureless point-like spark of invisible
light.

I see this as a symptom of the most dire anti-self respect.  That
image of a nothing, cowering behind a light too bright to see, denies
that there is any value or significance in struggle for
accomplishment.  This sentiment of human worthlessness conceals itself
behind that concept of an essence of the self.  Here's how it works.

     We all know how a superficial crust of trash can unexpectedly
     conceal some precious gift, like treasure buried in the dirt,
     or ordinary oyster hiding pearl.

     But minds are just the opposite.  We start as ordinary embryonic
     animals, which then each build those complicated things called
     minds -- whose merit lies entirely within their own coherency.
     The brain-cells, raw, of which they're made are, by themselves,
     as valueless as separate daubs of paint.

     That's why that soul idea is just as upside-down as seeking
     beauty in the canvas after scraping off Da Vinci's smears. To
     seek our essence only misdirects our search for worth -- since
     that is found, for mind, not in some priceless, compact core, but
     in its subsequently vast, constructed crust.

The very allegation of an essence is degrading to humanity.  It cedes
no merit to our aspirations to improve, but only to that absence of no
substance, which was there all along, but eternally detached from all
change of sense and content, divorced both from society of mind and
from society of man; in short, from everything we learn.

What good can come from such a thought, or lesson we can teach
ourselves?  Why, none at all -- except, perhaps, that it is futile to
think that changes don't exist, or that we are already worse or
better than we are.


   ---  Marvin Minsky

------------------------------

Date: Wed, 23 May 84 09:49:21 EDT
From: Stephen Miklos <Miklos@YALE.ARPA>
Subject: Essence of Things?

It is not too difficult to come up with a practical problem in which the
identity of the greek ship is important. To wit:
In year One, the owner of the ship writes a last will and testament,
leaving "my ship and all its fittings and appliances" to his nephew.
The balance of his estate he leaves to his wife. In Year Two, he commences
to refit his ship one board at a time. After a few years he has a pile of
old boards which he builds into a second ship. Then he dies.
A few hypotheticals:
   1. Suppose both ships are in existence at the time of probate.
   2. Suppose the old-board ship had been destroyed in a storm.
   3. Suppose the new-board ship had been destroyed in a storm.
   4. Suppose the original ship had been refitted by replacing the old
      boards with fiberglass.
   5. Suppose the original boat had not been refitted, but just taken
      apart and later reassembled.
   6. Suppose the original ship had been taken apart and replaced board
      by board, but as part of a single project in which the intention was to
      come up with two boats.
  6a. Suppose that this took a while, and that from time to time
      our Greek testator took the partially-reboarded boat
      out for a spin on the Mediterranean.


In each of these cases, who gets the old-board ship? Who gets the
new-board ship? It seems to me that the case for the fallaciousness of
the argument for boat y (the new-board boat) seriously suffers in hypo
#6 and thereby is compromised for the pure hypothetical. It should not
be the case that somebody's intention makes the difference in determining
the logical identity of an object, although that is the way the law
would handle the problem, if it could descry an intention.

                                 Just trying to get more confused,
                                 SJM

------------------------------

Date: Wed, 23 May 84 10:47 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Continuity of Identity

An interesting "practical" problem of the Greek Ship/Lincoln's Axe type
arises in the restoration of old automobiles.  Since many former
manufacturers are out of business, spare parts stocks may not exist,
body pieces may have been one-offs, and for other reasons, restoration
often involves the manufacture of "new" parts.  Obviously at some point
one has a "replica" of a Bugatti Type 35 rather than a "restored"
Bugatti Type 35 (and the latter is desirable enough to some people so
that they would happily start from a basket full of fragments. . .).
What is that point (and how many baskets of fragments can one original
Bugatti yield)?

In fact, old racing cars are worse.  The market value of, say, a 1959
Formula 1 Cooper is significantly enhanced if it was driven by, say,
Moss or Brabham, particularly if it was used to win a significant race.
But what if it subsequently was crashed and rebuilt?  Rebuilt from the
frame up?  Rebuilt *entirely* but assigned the previous chassis number
by the factory (a common practice)?  Under what circumstances is one
justified in advertising such an object as "ex-Moss"?

Mark

------------------------------

Date: 18 May 84 18:58:24-PDT (Fri)
From: ihnp4!mgnetp!burl!clyde!akgua!mcnc!ncsu!uvacs!edison!jso @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: edison.219

The resolution of the Greek Ship/Lincoln's Axe problem seems to be that
an object retains its identity over a period of time if it has an unbroken
time-line as a whole.  Most of the cells in your body weren't there when
you were born, and most that you had then aren't there now, but aren't you
still the same person/entity, though you have far from the same characteristics?

John Owens
...!uvacs!edison!jso

------------------------------

Date: Thu 24 May 84 13:00:04-PDT
From: Laurence R Brothers <LAURENCE@SU-CSLI.ARPA>
Subject: identity over time


"to cross again is not to cross". Obviously, people don't generally
function with that concept in mind, or nothing would be practically
identical to anything else. I forget the statistic that says how long it
takes for all the atoms in your body to be replaced by new ones, but,
presumably, you are still identifiable as the same person you were
x years ago.

How about saying that some object is "essentially identical" in context
y (where context y consists of a set of properties) to another object
if it is both causally linked to the first object, and is the object
that fulfills the greatest number of properties in y to the greatest
precision. Clearly, this definition does not work all that well in
some cases, but it at least has the virtue of conciseness.
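[That definition can be rendered as a scoring rule: among candidates
causally linked to the original, pick the one satisfying the most properties
of the context y.  The Python below is a hypothetical sketch of mine; the
property sets and link relation are invented examples.]

```python
# Sketch of "essentially identical in context y": among objects causally
# linked to the original, pick the one fulfilling most properties of y.

def essentially_identical(original, candidates, context, linked):
    pool = [c for c in candidates if linked(original, c)]
    if not pool:
        return None
    # Count how many context properties each candidate fulfills.
    return max(pool, key=lambda c: len(context & c["props"]))

me_now    = {"name": "now",   "props": {"memories", "name", "same-dna"}}
clone     = {"name": "clone", "props": {"same-dna"}}
context_y = {"memories", "name", "same-dna"}

winner = essentially_identical(
    {"name": "me-x-years-ago"},
    [me_now, clone],
    context_y,
    linked=lambda a, b: True,   # assume both are causally linked
)
assert winner["name"] == "now"
```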

If two objects are "essentially identical" in the "universal context",
then they may as well be named the same in common usage, at least,
if not with total accuracy, since they would seem to denote what
people would consider "naively" to be the same object.

-Laurence

------------------------------

Date: 22 May 84 22:48:39-PDT (Tue)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax
Subject: A restatement of the problem (phil/ai)
Article-I.D.: wxlvax.281

It has been my experience that whenever many people misinterpret me, it is
due to my unclarity (if that's a word) in making my statement.  This appears
to be what happened with my original posting on human perception vs computer
or robotic perception.  Therefore, rather than trying to reply to all the
messages that appeared on the net and in my mailbox, let me try a new, longer
posting that will hopefully clarify the question that I have.

"Let us consider some cases of misperception...  Take for example a "mild"
commonplace case of misperception.  Suppose that I see a certain object as
having a smooth surface, and I proceed to walk toward it.  As I approach it,
I come to realize visually (and it is, in fact, true) that its surface is
actually pitted and rough rather than smooth.
        A more "severe" case of misperception is the following.  Suppose
that, while touring through the grounds of a Hollywood movie studio, I
approach what, at first, I take to be a tree.  As I come near to it, I suddenly
realize that what I have been approaching is, in fact, not a tree at all but a
cleverly constructed stage prop.
        In each case I have a perceptual experience of an object at the end of
which I "go back" on an earlier attribution.  Of present significance is the
fact that in each case, although I do "go back" on an earlier attribution, I
continually *experience* it "as" one and the same.  For, I would not have
experienced myself now as having made a perceptual *mistake about an object*
unless I experience the object now as being THE VERY SAME object I experienced
earlier."  [This passage is from Dr. Miller's recent book:  Miller, Izchak.
"Husserl:  Perception and Temporal Awareness"  MIT Press, c. 1984.
It is quoted from page 64, by permission of the author.]

So, let me re-pose my original question:  As I understand it, issues of
perception in AI today are taken to be issues of feature-recognition.  But
since no set of features (including spatial and temporal ones) can ever
possibly uniquely identify an object across time, it seems to me (us) that this
approach is a priori doomed to failure.  Feature recognition cannot be the way
to accurately simulate/reproduce human perception.  Now, since I (we) are
novices in this field, I want to open the question up to those more
knowledgeable.  Why are AI/perception people barking up the wrong tree?  Or,
are they?

(One more note: PLEASE remember to put "For Alan" in the headers of mail
messages you send me.  ITT Corp is kind enough to allow me the use of my
father's account, but he doesn't need to sift through all my mail.)

  --Alan Wexelblat (for himself and Izchak Miller)
  (Currently appearing at: ..decvax!ittvax!wxlvax!rlw)

------------------------------

Date: 24 May 84 18:58-PDT
From: Laws@SRI-AI
Subject: Continuity

Other examples related to the Greek Ship difficulty: the continuity
of the Olympic flame (or rights to the Olympic name), possession of the
world heavyweight title if the champ retires and then "unretires",
title to property as affected by changes in either the property or
the owner's status, Papal succession and the right of ordained priests
to ordain others, personal identity after organ transplants, ...
In all these cases, the philosophical principles seem less important
than having some convention for resolving disputes.  Often market forces
are at work: the seller may make any claim that isn't outrageously
fraudulent, and the buyer pays a price commensurate with his belief
that the claims are valid, will hold up in court, or will be believed
by his own friends and customers.


On the subject of perception and recognition:  we have computational
methods of recognizing objects in images despite changes in background,
brightness or color, texture, perspective, motion, scale changes,
occlusion or damage, imaging technique (e.g., visual vs. infrared
or radar signatures), and other types of variation.  We don't yet
have a single computer program that can do all of the above, but most
of the matching problems have been solved by one program or another.
Some problems can't be solved, of course: is that red Volkswagen the
same one that I saw yesterday, or has another one been parked in the
same place?

The key to image analysis is often not in recognition of feature clusters
but in understanding how features change across space or time.  The patterns
of change are themselves features that must be recognized, and that can't
be done unless you can determine the image areas over which to compute
the gradients.  You can't recognize the whole from the parts because
you can't find the parts unless you know the configuration of the whole.

One of the most powerful techniques for such problems is hypothesize-
and-test.  Find anything in the scene that can suggest part of the
analysis, leap to a conclusion, and see if you can make the answer
fit the scene.  I suspect that this explains the object constancy that
Alan is worried about.  We are so loath to give up a previously
accepted parse that we will tolerate extreme deviations from our
expectations before abandoning the interpretation and searching for
another.  Even when forced to reparse, we have great difficulty in
combining the scene entities in groupings other than those we first
locked onto (as in Cole's Law and "how to wreck a nice beach"); this
suggests that the prominent groupings form symbolic proto-objects
that remain constant even though we reevaluate the details, or "features",
within the context of the groupings.
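[The hypothesize-and-test loop described above can be sketched generically:
leap to the first interpretation that fits well enough, then keep it until
the evidence degrades far below the threshold used to adopt it.  This
hypothetical Python is my own illustration; the scene data and thresholds
are invented.]

```python
# Generic hypothesize-and-test sketch.  Note the asymmetry: the
# abandonment threshold is much looser than the adoption threshold,
# modeling reluctance to give up an accepted parse.

def hypothesize_and_test(cues, fit, adopt_at=0.8, abandon_at=0.3):
    """Adopt the first hypothesis that fits well enough, then keep it
    unless its fit falls far below the adoption threshold."""
    current = None
    for cue in cues:
        if current is None:
            if fit(cue) >= adopt_at:
                current = cue          # leap to a conclusion
        elif fit(current) < abandon_at:
            current = None             # only now reparse
    return current

# Even if a later cue ("stage-prop") fits slightly better, the first
# adequate interpretation ("tree") persists.
fits = {"tree": 0.9, "stage-prop": 0.95}
result = hypothesize_and_test(["tree", "stage-prop"], lambda h: fits[h])
assert result == "tree"
```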

					-- Ken Laws

------------------------------

End of AIList Digest
********************

∂25-May-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #64
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 May 84  10:43:17 PDT
Date: Fri 25 May 1984 09:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #64
To: AIList@SRI-AI


AIList Digest            Friday, 25 May 1984       Volume 2 : Issue 64

Today's Topics:
  Courses - Expert Systems Syllabus Request,
  Games - Core War Sources,
  Logic Programming - Boyer-Moore Prover,
  AI Books - AI and Business,
  Linguistics - Use of "and",
  Scientific Method - Hardware Prototyping
----------------------------------------------------------------------

Date: 23 May 1984 1235-EDT
From: CASHMAN at DEC-MARLBORO
Subject: Expert systems course

Has anyone developed an expert systems course using the book "Building Expert
Systems" (Hayes-Roth & Lenat) as the basic text?  If so, do you have a
syllabus?

  -- Paul Cashman (Cashman@DEC-MARLBORO)

------------------------------

Date: Thursday, 24 May 1984 17:17:49 EDT
From: Michael.Mauldin@cmu-cs-cad.arpa
Subject: Core War...


Some people are having problems FTPing the core war source...  If you
prefer, just send me a note and I'll mail you the source over the net.
It is written in C, runs on Unix (4.1 immediately, or 4.2 with 5
minutes of hacking), and is mailed in one file of 42K characters.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(412) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 24-May-84 12:48:20-PDT
From: jbn@FORD-WDL1.ARPA
Subject: Re: Boyer-Moore prover on UNIX systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

    The Boyer-Moore prover is now available for UNIX systems.  While I
did the port, Boyer and Moore now have my code and have integrated it
into their generic version of the prover.  They are handling distribution.
The prover is now available for the Symbolics 3600, TOPS-20 systems,
Multics, and UNIX for both VAXen and SUNs.  There is a single version with
conditional compilation; it resides on UTEXAS-20 and can be obtained via
FTP.  Send requests to BOYER@UTEXAS-20 or MOORE@UTEXAS-20, not me, please.

    The minimum machine for the prover is a 2MB UNIX system with Franz Lisp
38.39 or later, about 20-80MB of disk,  and plenty of available CPU time.

    If you want to know more about the prover, read Boyer and Moore's
``A Computational Logic'' (1979, Academic Press, ISBN 0-12-122950-5).
Using the prover requires a thorough understanding of this work.

    Please pass this on to all who got the last notice, especially
bulletin boards and news systems.  Thanks.

                                        Nagle (@SCORE)

------------------------------

Date: 23 May 1984 13:50:30-PDT (Wednesday)
From: Adrian Walker <ADRIAN%ibm-sj.csnet@csnet-relay.arpa>
Subject: AI & Business

The summary on AI for Business is most interesting.
You might like to list also the book:

    Artificial Intelligence Applications for Business
    Walter Reitman, Editor
    Ablex Publishing Corporation, Norwood, New Jersey, 1984

It's in the bookstores now.

Adrian Walker
IBM SJ Research k51/282, tieline 276-6999, outside 408-256-6999
       vnet: sjrlvm1(adrian)    csnet: Adrian@ibm-sj
           arpanet: Adrian%ibm-sj@csnet-relay

------------------------------

Date: 18 May 84 9:34:56-PDT (Fri)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax
Subject: Re: Use of "and" - (nf)
Article-I.D.: pucc-i.281

We are blinded by everyday usage into putting an interpretation on

        "people in Indiana and Ohio"

that really isn't there.  That phrase should logically refer to

        1.  The PEOPLE of Indiana, and
        2.  The STATE of Ohio (but not the people).

If someone queries a program about "people in Indiana and Ohio", a
reasonable response by the program might be to ask,

        "Do you mean people in Indiana and IN Ohio?"

which may lead eventually to the result

        "There are no people in Indiana and in Ohio."


Dave Seaman
..!pur-ee!pucc-i:ags

------------------------------

Date: 20 May 84 8:23:00-PDT (Sun)
From: ihnp4!inuxc!iuvax!brennan @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: iuvax.3600002

Come on, Dave, I think you missed the point.  No person would
have any trouble at all understanding "people in Indiana and Ohio",
so why should a natural language parser have trouble with it???

JD Brennan
...!ihnp4!inuxc!iuvax!brennan   (USENET)
Brennan@Indiana                 (CSNET)
Brennan.Indiana@CSnet-Relay     (ARPA)

------------------------------

Date: 21 May 84 12:54:15-PDT (Mon)
From: harpo!ulysses!allegra!dep @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: allegra.2484

Why does everyone assume that there is no one who is both in Indiana and Ohio?
The border is rather long and it seems perfectly possible that from time to
time there are people with one foot in Indiana and the other in Ohio - or for
that matter, undoubtedly someone sleeps with his head in I and feet in O
(or vice versa).

Let's hear it for the stately ambiguous!

------------------------------

Date: Sun 20 May 84 18:56:36-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Quote

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


``... the normal mode of operation in computer science has been abandoned
in the realm of artificial intelligence.  The tendency has been to propose
solutions without perfecting them.''

                        Harold Stone, writing about the NON-VON machines
                        being proposed at Columbia

                        from Mosaic, the magazine of the National Science
                        Foundation, vol 15, #1, p. 24.

------------------------------

Date: Tue 22 May 84 18:43:35-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Re: Quote, background of

     There have been some requests for more context on the quote I posted.
The issue is that the Columbia people working on non-von Neumann
architectures are now proposing to build NON-VON 4, their fourth
machine.  However, NON-VONs 1 to 3 are either unfinished or were never
started, according to the writer quoted, and the writer doesn't think
much of this.
     My point in posting this is that it is significant that it appeared
in the National Science Foundation's publication.  The people with the
money may be losing patience.

------------------------------

Date: Mon 21 May 84 22:06:44-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Quote

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


        From Nagle (quoting Harold Stone)

        ``... the normal mode of operation in computer science has
        been abandoned in the realm of artificial intelligence.  The
        tendency has been to propose solutions without perfecting
        them.''


Which parse of this is correct?  Has the tendency to "propose
solutions without perfecting them" held in the remainder of computer
science, or in artificial intelligence?  Either way I think it is
ridiculous.  Computer Science is so young that there are very few
things that we have "perfected".  We do understand alpha-beta search,
LALR(1) parser generators, and a few other things.  But we haven't
come near to perfecting a theory of computation, or a theory of the
design of programming languages, or a theory of heuristics.

  --Tom

------------------------------

Date: Wed 23 May 84 00:16:43-EDT
From: David Shaw <DAVID@COLUMBIA-20.ARPA>
Subject: Re: FYI, again

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Tom,

I have just received a copy of your reaction to Harold Stone's criticism of
AI, and in particular, of the NON-VON project.  In answer to your question,
I'm certain, based on previous interactions with Harold, that the correct
parsing of his statement is captured by the contention that AI "proposes
solutions without perfecting them", while "the normal mode of operation in
computer science" perfects first, then proposes (and implements).

I share your feelings (and those expressed by several other AI researchers
who have written to me in this regard) about his comments, and would in
fact make an even stronger claim: that the "least-understood" areas in AI,
and indeed in many other areas of experimental computer science research,
often turn out in the long run to be the most important in terms of
ultimate practical import.  I do not mean to imply that concrete results in
such areas as the theories of heuristic search or resolution
theorem-proving are not important, or should not be studied by those
interested in obtaining results of practical value.  Still, it is my guess
that, for example, empirical findings based on real attempts to implement
"expert systems", while lacking in elegance and mathematical parsimony, may
well prove to have an equally important long-term influence on the field.

This is certainly not true in many fields of computer science research.
There are a number of areas in which "there's nothing so practical as a
good theory".  In AI, however, and especially in the construction of
non-von Neumann machines for AI and other symbolic applications, the
single-minded pursuit of generality and rigor, to the exclusion of (often
imperfectly directed) experimentation, would in many cases seem to be a
prescription for failure.

Those of us who experiment in silicon as well as instructions have recently
been the targets of special criticism.  Why, our critics ask, do we test
our ideas IN HARDWARE before we know that we have found the optimal
solutions for all the problems we claim to address?  Doesn't such behavior
demonstrate a lack of knowledge of the published literature of computer
architecture?  Aren't we admitting defeat when we first build one machine,
then construct a different one based on what we have learned in building
the first?  My answer to these criticisms is based on the observation that, in the
age of VLSI circuits, computer-aided logic design, programmable gate
arrays, and quick-turnaround system implementation, the experimental
implementation of hardware has taken on many of the salient qualities of
the experimental implementation of software.

Like their counterparts in software-oriented research, contemporary
computer architects often implement hardware in the course of their
research, and not only at the point of its culmination.  Such
experimentation helps to explicate "fuzzy" ideas, to prune the tree of
possible architectural solutions to given problems, and to generate actual
(as opposed to asymptotic or approximate) data on silicon area and
execution time expenditures.  Such experimentation would not be nearly so
critical if it were now possible to reliably predict the detailed operation
of a complex system constructed using a large number of custom-designed
VLSI circuits.  Unfortunately, it isn't.  In the real world, efforts to
advance the state of the art in new computer architectures without engaging
in the implementation of experimental prototypes presently seem to be as
futile as efforts to advance our understanding of systems software without
ever implementing a compiler or operating system.

In short, it is my feeling that "dry-dock" studies of "new generation"
computer architectures may now be of limited utility at best, and at worst,
seriously misleading, in the absence of actual experimentation.  Here, the
danger of inadequate study in the abstract seems to be overshadowed by the
danger of inadequate "reality-testing", which often leads to the rigorous
and definitive solution of practically irrelevant problems.

It's my feeling that Stone's comments reflect a phenomenon that Kuhn has
described in "The Structure of Scientific Revolutions" as characteristic of
a "shift of paradigm" in scientific research.  I still remember my reaction
as a graduate student at Stanford when my advisor, Terry Winograd, told our
research group that, in many cases, an AI researcher writes a program not
to study the results of its execution, but rather for the insight gained in
the course of its implementation.  A mathematician by training, I was
distressed by this departure from my model of mathematical (proof of
theorem) and scientific (conjecture and refutation) research.  In time,
however, I came to believe that, if I really wanted to make new science in
my chosen field, I might be forced to consider alternative models for the
process of scientific exploration.

I am now reconciled to this shift of paradigm.  Like most paradigm shifts,
this one will probably encounter considerable resistance among those whose
scientific careers have been grounded in a different set of rules.  Like
most paradigm shifts, its critics are likely to include those who, like
Harold Stone, have made the most significant contributions within the
constraints of earlier paradigms.  Like most paradigm shifts, however, its
value will ultimately be assessed not in terms of its popularity among such
scientists, but rather in terms of its contribution to the advancement
of our understanding of the area to which it is applied.

Personally, I find considerable merit in this new research paradigm, and
plan to continue to devote a large share of my efforts to the experimental
development and evaluation of architectures for AI and other symbolic
applications, in spite of the negative reaction such efforts are now
encountering in certain quarters.  I hope that my colleagues will not be
dissuaded from engaging in similar research activities by what I regard as
the transient effects of a fundamental paradigm shift.

David

------------------------------

End of AIList Digest
********************

∂27-May-84  2229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #65
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 84  22:28:05 PDT
Date: Sun 27 May 1984 21:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #65
To: AIList@SRI-AI


AIList Digest            Monday, 28 May 1984       Volume 2 : Issue 65

Today's Topics:
  AI Tools - KS300 & MicroPROLOG and LISP,
  Expert Systems - Checking of NMOS Cells,
  AI Courses - Expert Systems,
  Cognition - Dreams & ESP,
  Seminars - Explanation-Based Learning & Analogy in Legal Reasoning &
    Nonmonotonicity in Information Systems
----------------------------------------------------------------------

Date: 23 May 84 12:42:27-PDT (Wed)
From: hplabs!hao!seismo!cmcl2!philabs!linus!vaxine!chb @ Ucb-Vax
Subject: KS300 Question
Article-I.D.: vaxine.266

Does anybody know who owns the rights to the KS300 expert systems tool?
KS300 is an EMYCIN lookalike, and I think it runs under INTERLISP.  Any help
would be appreciated.


  -----------------------------------------------------------

"It's not what you look like when you're doin' what you're doin', it's what
you're doin' when you're doin' what you look like what you're doin'"
                        ---125th St. Watts Band


                                        Charlie Berg
                                     ...allegra!vaxine!chb

------------------------------

Date: 25 May 84 12:28:22-PDT (Fri)
From: hplabs!hao!seismo!cmcl2!floyd!whuxle!spuxll!abnjh!cbspt002 @
      Ucb-Vax
Subject: MicroPROLOG and LISP for the Rainbow?
Article-I.D.: abnjh.647

Can anybody point me toward microPROLOG and LISPs for the DEC
Rainbow 100. Either CP/M86 or MS-DOS 2.0, 256K, floppies.

Thanks in advance.

M. Kenig
ATT-IS, S. Plainfield NJ
uucp: ...!abnjh!cbspt002

------------------------------

Date: 25 May 1984 1438-PDT (Friday)
From: cliff%ucbic@Berkeley (Cliff Lob)
Subject: request for info


This is a request to hear about any work that is going on
related to my master's research in expert systems:

RULE BASE ERROR CHECKING OF NMOS CELLS

     The idea is to build an expert system that embodies the knowledge
of expert VLSI circuit designers to criticize NMOS circuit design
at the cell (<15 transistors) level.  It is not to be a simulator,
but rather it is to be used by designers to have their cell critiqued
by an experienced expert. The program will be used to try to catch
the subtle bugs (ie non-logic error, not shown by standard simulation)
that occur in the cell design process.
     I will be writing the code in PSL and a KRL Frame type language.
     Is there any work of a similar nature going on?

                        Cliff Lob
                        cliff@ucbic.BERKELEY

------------------------------

Date: Fri 25 May 84 13:33:49-MDT
From: Robert R. Kessler <KESSLER@UTAH-20.ARPA>
Subject: re: Expert systems course (Vol 2, #64)

I taught a course this spring quarter on "Knowledge Engineering" using the
Hayes-Roth text.  Since we only had a quarter, I decided to focus on
writing expert systems as opposed to developing expert systems tools.  We
had available Hewlett Packard's Heuristic Programming and Representation
Language (HPRL) to use to build some expert systems.  A general outline
follows:

  First third: Covered the first 2 to 3 chapters of the text.
    This gave the students enough exposure to general expert systems
    concepts.
  Second third: In depth exposure of HPRL.  Studied knowledge
    representation using their Frame structure and both forward and
    backward chaining rules.
  Final third: Discussed the Oak Ridge Natl Lab problem covered in Chapter
    10 of the text.  We then went through each of the systems described
    (Chapters 6 and 9) to understand their features and misfeatures.
    Finally, we contrasted how we would have solved the problem using
    HPRL.

 Students had various assignments during the first half of the quarter to
 learn about frames, and both types of rules.  They then (and are right
 now) working on a final expert system of their own choosing (have varied
 from a mechanics helper, plant doctor, first aid expert, simulator of the
 SAIL game, to others).

All in all, the text was very good, and is so far the best I've seen.

Bob.

------------------------------

Date: Sat, 26 May 84 17:06:57 PDT
From: Philip Kahn <kahn@UCLA-CS.ARPA>

RE:  Subject: cognitive psychology / are dreams written by a committee?

FLAME ON

 Where can you find any evidence that "dreams are programmed,
 scheduled event-sequences, not mere random association?"
 I have never found any author that espoused this viewpoint.
 If anything, I think that viewpoint imposes far too much conscious
 behavior onto unconscious phenomena.  If they are indeed run by
 a "committee", what happens during a proxy fight?

FLAME OFF

------------------------------

Date: Fri 25 May 84 10:13:51-PDT
From: NETSW.MARK@USC-ECLB.ARPA
Subject: epiphenomenon conjecture

 conjecture: 'consciousness', 'essence' etc. are epiphenomena at the
 level of the 'integrative function' which facilitates the interaction
 between members of the 'community' of brain-subsystems.  Many a-i
 systems have been developed which model particular putative or likely
 brain-subsystems; what is the status of efforts to integrate
 such systems in an attempt to model consciousness as a
 'community of a-i systems' ???

------------------------------

Date: Fri, 25 May 84 10:09:44 PDT
From: Scott Turner <srt@UCLA-CS.ARPA>
Subject: Dreams...Far Out

Did the astronauts on the moon suffer any problems with dreams, etc?  Without
figuring the attenuation, it seems like that might be far enough away to
cause problems with reception...since I don't recall any such effects, perhaps
we can assume that mankind doesn't have any such carrier wave.

Makes a good base for speculative fiction, though.  Interstellar travel
would have to be done in ships large enough to carry a critical mass of
humans.  Perhaps insane people are merely unable to pick up the carrier wave,
and so on.

                                                -- Scott

------------------------------

Date: Sun 27 May 84 11:44:43-PDT
From: Joe Karnicky
Reply-to: ZZZ.V5@SU-SCORE.ARPA
Subject: Re: existence of telepathy

     I disagree strongly with Ken's assertion that "There seems to be growing
evidence that telepathy works, at least for some people some of the time."
(May 21 AIlist).   It seems to me that the evidence which exists now is the
same as has existed for possibly 100,000 years, namely anecdotes and poorly
controlled experiments.    I recommend reading the book "Science: Good, Bad,
and Bogus" by Martin Gardner,  or any issue of "The Skeptical Inquirer".
What do you think ?
                                                Joe Karnicky

------------------------------

Date: 23 Apr 84 10:51:01 EST
From: DSMITH@RUTGERS.ARPA
Subject: Seminar - Explanation-Based Learning

[This and the following Rutgers seminar notices were delayed because
I have not had access to the Rutgers bboard for several weeks.  This
seems a good time to remind readers that AIList carries such abstracts
not to drum up attendance, but to inform those who cannot attend.  I
have been asked several times for help in contacting speakers, evidence
that the seminar notices do prompt professional interchanges.  -- KIL]

                        Department of Computer Science

                                  COLLOQUIUM

SPEAKER:        Prof. Gerald DeJong
                University of Illinois

TITLE:          EXPLANATION BASED LEARNING


  Machine Learning  is  one  of the most important current areas of Artificial
Intelligence.  With the trend away from  "weak  methods"  and  toward  a  more
knowledge-intensive  approach  to intelligence,  the  lack  of knowledge in an
Artificial Intelligence system becomes one of the most serious limitations.

  This talk advances a technique called explanation based learning.  It  is  a
method of learning from observations. Basically, it involves endowing a system
with sufficient  knowledge so that intelligent planning behavior of others can
be recognized. Once recognized, these observed plans are generalized as far as
possible while preserving the underlying explanation of their success.  The
approach  supports  one-trial learning.  We are applying the approach to three
diverse areas: Natural Language processing, robot task planning, and proof  of
propositional  calculus theorems.   The approach holds promise for solving the
knowledge collection bottleneck in the construction of Expert Systems.


DATE:           April 24

TIME:           2:50 pm

PLACE:          Hill 705


                                Coffee at 2:30





                        Department of Computer Science

                                  COLLOQUIUM


SPEAKER:        Rishiyur Nikhil
                University of Pennsylvania

TITLE:          FUNCTIONAL PROGRAMMING LANGUAGES AND DATABASES


                                   ABSTRACT

  Databases and  Programming  Languages  have  traditionally  been  "separate"
entities, and their interface (via subroutine libraries, preprocessors,  etc.)
is generally cumbersome and error-prone.

  We  argue that a functional programming language, together with a data model
called  the "Functional  Data  Model",  can  provide  an  elegant  and  simple
integrated database programming environment. Not only does the Functional Data
Model provide a richer model for new database systems, but it is also easy  to
implement atop existing relational and network databases. A "combinator"-style
implementation technique is particularly suited to implementing  a  functional
language in a database environment.

  Functional database languages also admit a rich type structure, based on
that of the programming language ML.  While having the advantages of strong
static type-checking, and allowing the definition of user-views of the
database, it is unobtrusive enough to permit an interactive, incremental,
Lisp-like programming style.

  We shall illustrate these ideas with examples from the language  FQL,  where
they have been prototyped.

DATE:           Thursday, April 26, 1984

TIME:           2:50 p.m.

PLACE:          Room 705 - Hill Center

                                Coffee at 2:30

------------------------------

Date: 3 May 84 16:21:34 EDT
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Seminar - Analogy in Legal Reasoning

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                      machine learning brown bag seminar

Title:        Analogy with Purpose in Legal Reasoning from Precedents

Speaker:      Smadar Kedar-Cabelli <Kedar-Cabelli@Rutgers.Arpa>

Date:         Wednesday, May 9, 1984, 12:00-1:30
Location:     Hill Center, Room 423 (note new location)


       One  open  problem in current artificial intelligence (AI) models of
    learning and reasoning by analogy is: which aspects  of  the  analogous
    situations  are  relevant to the analogy, and which are irrelevant?  It
    is currently recognized that analogy involves mapping  some  underlying
    causal structure between situations [Winston, Gentner,
    Burstein, Carbonell].  However, most current models of analogy provide
    the  system  with  exactly  the relevant structure, tailor-made to each
    analogy to be performed.  As AI systems become more  complex,  we  will
    have  to  provide them with the capability of automatically focusing on
    the relevant aspects of situations when reasoning analogically.   These
    will  have  to  be  sifted from the large amount of information used to
    represent complex, real-world situations.

       In order to study these general issues, I am examining a  particular
    case  study  of learning and reasoning by analogy: legal reasoning from
    precedents.  This is studied within the TAXMAN  II  project,  which  is
    investigating  legal reasoning using AI techniques [McCarty, Sridharan,
    Nagel].

       In this talk, I will discuss the problem and a proposed solution.  I
    am examining legal reasoning from  precedents  within  the  context  of
    current  AI  models  of  analogy.  I plan to add a focusing capability.
    Current  work  on  goal-directed  learning   [Mitchell,   Keller]   and
    explanation-based  learning  [DeJong] applies here:  the explanation of
    how the analogous precedent case satisfies the goal of the legal
    argument  helps  to  automatically  focus  the  reasoning  on  what  is
    relevant.

       Intuitively, if your purpose  is  to  argue  that  a  certain  stock
    distribution  is  taxable by analogy to a precedent case, you will know
    that aspects of the cases having to do with the change in the  economic
    position  of  the  defendants  are  relevant  for  the  purpose of this
    analogy, while aspects of the case such as the size of paper  on  which
    the  stocks were printed, or the defendants' hair color, are irrelevant
    for that purpose.  This knowledge of purpose, and the ability to use it
    to focus on relevant features, are missing from most current AI  models
    of analogy.

------------------------------

Date: 15 May 84 11:13:50 EDT
From: BORGIDA@RUTGERS.ARPA
Subject: Seminar - Nonmonotonicity in Information Systems

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


          III Seminar by Alex Borgida, Wed. 2:30 pm/Hill 423

        The problem of Exceptional Situations in Information Systems --
                                  An overview


We  begin  by  illustrating  the wide range of exceptional situations which can
arise in the context of Information Systems (ISs). Based on this  evidence,  we
argue   for   1)   a   methodology   of   software   design   which   abstracts
exceptional/special cases by considering normal  cases  first  and  introducing
special  cases  as  annotations  in successive phases of refinement, and 2) the
need for ACCOMMODATING AT  RUN  TIME  exceptional  situations  not  anticipated
during  design.  We  then  present  some Programming Language features which we
believe support the above goals,  and  hence  facilitate  the  design  of  more
flexible ISs.

We   conclude   by   briefly  describing  two  research  issues  in  Artificial
Intelligence which arise out of this work: a) the problem of logical  reasoning
in  a  knowledge  base of formulas where exceptions "contradict" general rules,
and b) the issue of suggesting improvements to the design of an IS based on the
exceptions to it which have been encountered.

------------------------------

End of AIList Digest
********************

∂29-May-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #66
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 May 84  11:47:48 PDT
Date: Tue 29 May 1984 10:13-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #66
To: AIList@SRI-AI


AIList Digest            Tuesday, 29 May 1984      Volume 2 : Issue 66

Today's Topics:
  AI Courses - Expert Systems,
  Expert Systems - KS300 Response,
  Linguistics - Use of "and",
  Perception - Identification & Misperception,
  Philosophy - Identity over Time & Essence,
  Seminar - Using PROLOG to Access Databases
----------------------------------------------------------------------

Date: Tue 29 May 84 08:59:00-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Expert Systems Course

Gordon Novak at UT (UTEXAS-20) teaches Expert Systems based on
"Building Expert Systems".  The class project is building a system
with Emycin.  For details on the syllabus, please contact Dr. Novak.
I took the course and found the "hands-on" experience very helpful
as well as Dr. Novak's comments and anecdotes about the other system
building tools.

Charles Petrie

------------------------------

Date: Mon 28 May 84 22:42:41-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: KS300 Inquiry

KS300 is a product of Teknowledge, Inc.  Palo Alto, CA

------------------------------

Date: 23 May 84 17:31:36-PDT (Wed)
From: hplabs!hao!seismo!cmcl2!philabs!sbcs!debray @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: sbcs.640

        > No person would have any trouble at all understanding "people
        > in Indiana and Ohio", so why should a natural language parser
        > have trouble with it???

The problem is that the English word "and" is used in many different ways,
e.g.:

1) "The people in Indiana and Ohio" -- refers to the union of the set of
people in Indiana, and the set of people in Ohio.  Could conceivably be
rewritten as "the people in Indiana and the people in Ohio".  The arguments
to "and" can be reordered, i.e.  it refers to the same set as "the people in
Ohio and Indiana".

2) "The house on 55th Street and 7th Avenue" -- refers to the *intersection*
of the set of houses on 55th street and the set of houses on 7th Avenue
(hopefully, a singleton set!).  NOT the same as "the house on 55th street
and the house on 7th Avenue".  The arguments to "and" *CAN* be reordered,
however, i.e.  one could as well say, "the house on 7th Ave. and 55th
Street".

3) "You can log on to the computer and post an article to the net" -- refers
to a temporal order of events: login, THEN post to the net.  Again, not the
same as "you can log on to the computer and you can post an article to the
net".  Unlike (2) above, the meaning changes if the arguments to "and" are
reordered.

4) "John aced Physics and Math" -- refers to logical conjunction.  Differs
from (2) in that it can also be rewritten as "John aced Physics and John
aced Math".

&c.

People know how to parse these different uses of "and" correctly due to a
wealth of semantic knowledge.  For example, knowledge about computers (that
articles cannot be posted to the net without logging onto a computer)
enables us to determine that the "and" in (3) above refers to a temporal
ordering of events.  Without such semantic information, your English
parser'll probably get into trouble.

Saumya Debray,  SUNY at Stony Brook

        uucp:
            {cbosgd, decvax, ihnp4, mcvax, cmcl2}!philabs \
                    {amd70, akgua, decwrl, utzoo}!allegra  > !sbcs!debray
                        {teklabs, hp-pcd, metheus}!ogcvax /
        CSNet: debray@suny-sbcs@CSNet-Relay
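
[The contrast between senses (1) and (2) above can be made concrete with a
small sketch.  The mini-database and function names below are invented for
illustration; they are not from Debray's message or any real parser. -- Ed.]

```python
# Toy illustration of two readings of "and":
#   sense 1 ("people in Indiana and Ohio")       -> set UNION
#   sense 2 ("the house on 55th St and 7th Ave") -> set INTERSECTION
# The tiny database below is invented for this sketch.

people_in = {
    "Indiana": {"Alice", "Bob"},
    "Ohio": {"Carol"},
}

houses_on = {
    "55th Street": {"house-12", "house-40"},
    "7th Avenue": {"house-40"},
}

def people_in_states(*states):
    """Sense 1: the union of the people in each state."""
    result = set()
    for s in states:
        result |= people_in[s]
    return result

def house_at_corner(*streets):
    """Sense 2: the intersection of the houses on each street."""
    result = None
    for st in streets:
        result = houses_on[st] if result is None else result & houses_on[st]
    return result

print(sorted(people_in_states("Indiana", "Ohio")))   # union: all three people
print(house_at_corner("55th Street", "7th Avenue"))  # intersection: one house
```

Note that both functions are insensitive to argument order, as Debray
observes for senses (1) and (2); his senses (3) and (4) would need temporal
or logical machinery beyond set operations.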

------------------------------

Date: Fri 25 May 84 12:10:32-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Object identification

The AI approach certainly does not seem to be hopeless.  As someone else
mentioned, the boat and ax problems are philosophical ones.  They fall
a bit out of our normal (non-philosophical) area of object recognition:
these are recognition problems for ordinary people.  The point we should
get from them is that there may not be an objective single algorithm that
completely matches our intuition about pattern recognition in all cases.
In fact, these problems may show such to be impossible since there is
no intuitive consensus in these cases.

The AI approach aspires to something more humble - finding techniques
that work on particular objects enough of the time so as to be useful.
Representing objects as feature, or attribute, sets does not seem hopeless
just because an object's features change over time.  Presumably, we can
get a program to handle that problem the same way that people do.  We
seem to conclude that an object is the same if it has not changed too
much in some sense.  Given that the values of the attributes of an object
change, we recognize it as the same object if, since the last observation,
either the values have not changed very much, or most values have not
changed, or if certain high priority values haven't changed, or some
combination of the first three.  To some extent, object recognition
is subjective in that it depends on the changes since the last
observation.  When we come home after 20 years, we are likely to remark
that the town is completely different.  But what makes it the same town,
so that we can talk about its differences, is that certain high-importance
attributes have not changed, such as its location and the major
street layout.  If we can discover sufficient heuristics of how to
handle this kind of change, then we succeed.  Since people already do
it, even if it involves large amounts of additional contextual
information, feature recognition is obviously possible.

Charles Petrie
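
[Petrie's identity heuristic -- same object if high-priority attributes are
unchanged and most other attributes have not changed -- can be sketched as
follows.  The attribute names and the 50% threshold are invented for
illustration, not from the message. -- Ed.]

```python
# Sketch of the attribute-set identity heuristic described above.
# An object is judged "the same" if (1) all high-priority attributes
# match exactly, and (2) at most max_changed_fraction of the remaining
# attributes differ.  Names and threshold are illustrative inventions.

def same_object(old, new, priority_keys, max_changed_fraction=0.5):
    """old, new: dicts mapping attribute name -> value."""
    # Rule 1: high-priority attributes must be unchanged.
    if any(old.get(k) != new.get(k) for k in priority_keys):
        return False
    # Rule 2: most remaining attributes must be unchanged.
    rest = [k for k in set(old) | set(new) if k not in priority_keys]
    if not rest:
        return True
    changed = sum(1 for k in rest if old.get(k) != new.get(k))
    return changed / len(rest) <= max_changed_fraction

town_1964 = {"name": "Springfield", "founded": 1820, "location": "Indiana",
             "layout": "grid", "population": 5000, "mayor": "Smith"}
town_1984 = {"name": "Springfield", "founded": 1820, "location": "Indiana",
             "layout": "grid", "population": 9000, "mayor": "Jones"}

# Location and street layout (the high-priority attributes) are unchanged,
# and only 2 of the 4 remaining attributes differ: still the same town.
print(same_object(town_1964, town_1984,
                  priority_keys=["location", "layout"]))  # prints True
```

Which attributes count as high priority, and how much drift to tolerate,
is exactly the contextual knowledge Petrie argues must be discovered
empirically.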

------------------------------

Date: 23 May 84 11:18:54-PDT (Wed)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: Re: misperception
Article-I.D.: ihuxr.1096

Alan Wexelblat gave the following example of misperception:

                         -------------------
        A more "severe" case of misperception is the following.  Suppose
that, while touring through the grounds of a Hollywood movie studio, I
approach what, at first, I take to be a tree.  As I come near to it, I suddenly
realize that what I have been approaching is, in fact, not a tree at all but a
cleverly constructed stage prop.
                         -------------------

This reminds me strongly of the Chapter, "Knock on Wood (Part two)",
of TROUT FISHING IN AMERICA. Here is an excerpt:

        I left the place and walked down to the different street
        corner.  How beautiful the field looked and the creek that
        came pouring down in a waterfall off the hill.

        But as I got closer to the creek I could see that something
        was wrong.  The creek did not act right.  There was a strangeness
        to it.  There was a thing about its motion that was wrong.
        Finally I got close enough to see what the trouble was.

        The waterfall was just a flight of white wooden stairs
        leading up to a house in the trees.

        I stood there for a long time, looking up and looking down,
        following the stairs with my eyes, having trouble believing.

        Then I knocked on my creek and heard the sound of wood.

TROUT FISHING IN AMERICA abounds with striking metaphors, similes, and
other forms of imagery.  I had never considered these from the point
of view of the science of perception,  but now that I do so, I think
they provide some interesting examples for contemplation.

The first chapter, "The Cover for Trout Fishing in America", provides
a very simple but interesting perceptual shift.  "The Hunchback Trout"
provides an extended metaphor based on a simple perceptual similarity.

Anyway, it's a great book.

        Lew Mammel, Jr. ihnp4!ihuxr!lew

------------------------------

Date: 24 May 84 11:35:55-PDT (Thu)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: ccivax.144

In reference to John Owens's resolution of the Greek Ship problem:

> Most of the cells in your body weren't there when
> you were born, and most that you had then aren't there now, but aren't
> you still the same person/entity, though you have far from the same
> characteristics?

Is it such an easy question?  It's far from clear
that the answer is yes.  The question might be
What is it that we recognize as persisting over time?
And if all the cells in our bodies are different,
then where does this "what" reside?  Could it be that
nothing persists?  Or is it that what persists is
not material (in the physical sense)?


        Bill Anderson

        ...!{ {ucbvax | decvax}!allegra!rlgvax }!ccivax!band

------------------------------

Date: 25 May 84 17:46:26-PDT (Fri)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!flink @ Ucb-Vax
Subject: pointer -- identity over time
Article-I.D.: umcp-cs.7266

I have responded to Norm Andrews, Brad Blumenthal and others on the subject
of identity across time, in net.philosophy, which I think is where it
belongs.  Anyone interested should see my recent posting there. --P. Torek

------------------------------

Date: 25 May 84 15:08:52-PDT (Fri)
From: decvax!decwrl!dec-rhea!dec-smurf!arndt @ Ucb-Vax
Subject: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: decwrl.621

But perception, don't you see, is in the I of the beholder!

Remember the problem of Alice, "Which dreamed it?"

"Now, Kitty, let's consider who it was that dreamed it all.  This is a
serious question, my dear, and you should not go on licking your paw like
that -  as if Dinah hadn't washed you this morning!  You see, Kitty, it MUST
have been either me or the Red King.  He was part of my dream, of course -
but then I was part of his dream, too!  Was it the Red King, Kitty?  You
were his wife, my dear, so you ought to know - oh, Kitty, DO help to settle
it!  I'm sure your paw can wait."


The point being, if WE can't decide logically what constitutes a "REAL"
perception for ourselves (and I contend that there is no LOGICAL way out
of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis
if another human, not to mention a computer, has perception?  We can't!!

Therefore we operate on a faith basis a la Turing and move forward on a
practical level and don't ask silly questions like, "Can Computers Think?".

Comments?

Regards,

Ken Arndt

------------------------------

Date: 26 May 84 13:07:47-PDT (Sat)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxt!marcus @ Ucb-Vax
Subject: Re: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: pyuxt.119

Eye agree!  While it is valuable to challenge the working premises that
underlie research, for most of the time we have to accept these on faith
(working hypotheses) if we are to be at all productive.  Most arguments
connected with Descartes or with perceptions of perceptions ultimately have
led to blind alleys and dead ends.

                marcus hand (pyuxt!marcus)

------------------------------

Date: 28 May 1984 2124-PDT
From: WENGER%UCI-20B@UCI-750a
Subject: Response to Marvin Minsky

Although I concede that Marvin Minsky's statements about the essence of
consciousness are a somewhat understandable reaction to a common form of
spiritual immaturity, they are also an expression of an equal form of
immaturity that I find to be very common in the scientific community.
We should beware of reactions because they are rarely significantly different
from the very things they are reacting to.

Therefore, I would like to respond to his statements with a less restrictive --
maybe even refreshing -- point of view. I think it deserves some pondering.

The question 'Does a machine have a soul ?' may well be a question that only
the machine itself can validly ask when it gets to that point. My experience
suggests that the question whether one has a soul can be asked meaningfully
only in the first person singular.  Asking questions presupposes some
knowledge of the subject; total ignorance requires a quest. What do we know
about the subject except for our own ideas ?

Now, regardless of how the issue should or can be approached, the fact is that
answering the question of the soul on the grounds that the existence of an
essential reality would interfere with our achievements is really an
irrelevant statement. Investigation cannot be a matter of personal preference.
Discarding an issue on the basis of its ramifications on our image of
ourselves is contrary to the scientific approach. Should we stop studying AI
because it might trivialize our notion of intelligence ?

The statement is not only irrelevant, but I do not see that it is even correct.
I do not find any contradiction between perceiving one's source of
consciousness as having some essential quality and striving for achievements.
The contradiction is based on a view of the soul as inherently static which
need not be true. My personal experience so far has actually been to the exact
contrary.

One can dance to try to feel good, or because one is feeling good. The
difference may only be in the quality of the experience, and the movements look
very much the same. One can strive for achievements to find an identity or to
fulfill one's identity.

As a student in AI, I share the opinion that discarding non-mechanistic
factors is a necessary working assumption for the study of intelligence. I
even hold the personal belief that what we commonly call intelligence will
eventually turn out to be fully amenable to mechanistic reduction.

However, we cannot extrapolate from our assumptions to statements about
the essence of one's being, first because assumptions are not facts yet,
secondly because intelligence and consciousness may not be the same thing.

Therefore claiming that essential aspects do not exist in the phenomenon of
consciousness is in the present state of scientific knowledge an unreasonable
reaction that unnecessarily narrows the field of our investigation. I even
consider it a regrettable impoverishment because of the meaningful personal
experiences one may be able to find in the course of an essential quest.

Intellectual honesty should deter us from making such unfounded statements
even if they seem to fit well in a common form of scientific paradigm.
Rather it should inspire us to objectively assess the frontiers of our
knowledge and understanding, and to strive to expand them without
preconceptions to the best of our abilities and the extent of our individual
concerns.

Etienne Wenger

------------------------------

Date: 3 May 84 10:13:04 EDT
From: BORGIDA@RUTGERS.ARPA
Subject: Seminar - Using PROLOG to Access Databases

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      May 3 AT 2:50 in HILL 705:


               USING PROLOG TO PLAN ACCESS TO CODASYL DATABASES


                                  P.M.D. Gray
                       Department of Computing Science,
                              Aberdeen University


  A  program generator which plans a program structure to access records stored
in a Codasyl database, in answer to queries  formulated  against  a  relational
view, has been written in Prolog.  The program uses two stages:

   1. Rewriting the query;

   2. Generation and selection of alternative programs.

The generated programs are in Fortran or Cobol, using Codasyl DML.    The  talk
will  discuss  the  pros and cons of this approach and compare it with Warren's
approach of generating and re-ordering a Prolog form of the query.
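
A minimal sketch of the two-stage idea, in Python rather than Prolog; the
relation names, sizes, and cost model below are invented for illustration
and are not Gray's:

```python
# Stage 1 rewrites the relational query; stage 2 generates alternative
# access programs and selects the cheapest.  All names/costs are invented.
from itertools import permutations

SIZES = {"DEPT": 10, "EMP": 1000, "PROJ": 100}  # hypothetical record counts

def rewrite(query):
    # Stage 1: push selections ahead of joins so records are filtered early.
    selects = [op for op in query if op[0] == "select"]
    joins = [op for op in query if op[0] == "join"]
    return selects + joins

def candidates(query):
    # Stage 2a: each ordering of the joins is a different candidate program.
    selects = [op for op in query if op[0] == "select"]
    joins = [op for op in query if op[0] == "join"]
    for order in permutations(joins):
        yield selects + list(order)

def cost(program):
    # Toy cost model: running cost grows with the intermediate result size.
    total, intermediate = 0, 1
    for op in program:
        if op[0] == "join":
            intermediate *= min(SIZES[op[1]], SIZES[op[2]])
            total += intermediate
    return total

def plan(query):
    # Stage 2b: select the cheapest of the generated alternatives.
    return min(candidates(rewrite(query)), key=cost)

query = [("join", "EMP", "PROJ"),
         ("select", "EMP.age > 30"),
         ("join", "DEPT", "EMP")]
best = plan(query)
# The cheap DEPT-EMP join is ordered before the larger EMP-PROJ join.
```

The rewrite stage narrows record access before any navigation is planned;
the enumeration stage then orders the joins by estimated cost.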

                            (Note added by Malcolm Atkinson)
   The Astrid system previously developed by Peter had a relational algebra
      query language, and an interactive (by example) method of debugging
     queries and of specifying report formats, which provided an effective
        interface to Codasyl databases.  Peter's current work is on the
     construction of a system to explain to people what the schema implies
   and what a database contains - he is using PS-algol and Prolog for this.

------------------------------

End of AIList Digest
********************

∂31-May-84  2333	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #67
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 May 84  23:33:13 PDT
Date: Thu 31 May 1984 22:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #67
To: AIList@SRI-AI


AIList Digest             Friday, 1 Jun 1984       Volume 2 : Issue 67

Today's Topics:
  Natural Language - Request,
  Expert Systems - KS300 Reference,
  AI Literature - CSLI Report on Bolzano,
  Scientific Method - Hardware Prototyping,
  Perception - Identity,
  Seminar - Perceptual Organization for Visual Recognition
----------------------------------------------------------------------

Date: 4 Jun 84 8:08:13-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!ellis @ Ucb-Vax.arpa
Subject: Pointers to natural language interfacing

  Article-I.D.: west44.214

I am investigating the feasibility of writing a natural language interface for
the UNIX operating system, and need some pointers to good articles/papers/books
dealing with natural language interpreting.  Any help would be greatly
appreciated as I am fairly 'green' in this area.

        mcvax
         |
        ukc!root44!west44!ellis
       /   \
  vax135    hou3b
       \   /
       akgua

        Mark Ellis, Westfield College, Univ. of London, England.


[In addition to any natural language references, you should certainly
see "Talking to UNIX in English: An Overview of an On-line UNIX
Consultant" by Robert Wilensky, The AI Magazine, Spring 1984, pp.
29-39.  Elaine Rich also mentioned this work briefly in her introduction
to the May 1984 issue of IEEE Computer.  -- KIL]

------------------------------

Date: 28 May 84 12:55:37-PDT (Mon)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!cornell!jqj @ Ucb-Vax.arpa
Subject: Re: KS300 Question
Article-I.D.: cornell.195

KS300 is owned by (and a trademark of) Teknowledge, Inc.  Although
it is largely based on Emycin, it was extensively reworked for
greater maintainability and reliability, particularly for Interlisp-D
environments (the Emycin it was based on ran only on DEC-20
Interlisp).

Teknowledge can be reached by phone (no net address, I think)
at (415) 327-6600.

------------------------------

Date: Wed 30 May 84 19:41:17-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: CSLI Report

         [Forwarded from the CSLI newsletter by Laws@SRI-AI.]

                New CSLI-Report Available

``Lessons from Bolzano'' by Johan van Benthem, the latest CSLI-Report,
is now available. To obtain a copy of Report No. CSLI-84-6, contact
Dikran Karagueuzian at 497-1712 (Casita Hall, Room 40) or Dikran at SU-CSLI.

------------------------------

Date: Thu 31 May 84 11:15:35-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Hardware Prototyping


On the issue of the Stone - Shaw wars.  I doubt that there really is
a viable "research paradigm shift" in the holistic sense.  The main
problem that we face in the design of new AI architectures is that
there is a distinct possibility that we can't let existing ideas
simply evolve.  If this is true then the new systems will have to try
to incorporate a lot of new strategies which create a number of
complex problems, i.e.

        1.  Each new area means that our experience may not be
            valid.

        2.  Interactions between these areas may be the problem,
            rather than the individual design choices - namely
            efficient consistency is a difficult thing to
            achieve.

In this light it will be hard to do true experiments where one factor
gets isolated and tested.  Computer systems are complex beasts and the
problem is even harder to solve when there are few fundamental metrics
that can be applied microscopically to indicate success or failure.
Macroscopically there is always cost/performance for job X, or set of
tasks Y.

The experience will come at some point, but not soon in my opinion.
It will be important for people like Shaw to go out on a limb and
communicate the results to the extent that they are known.  At some
point from all this chaos will emerge some real experience that will
help create the future systems which we need now.  I for one refuse to
believe that an evolved Von Neumann architecture is all there is.

We need projects like DADO, Non-Von, the Connection Machine, ILLIAC,
STAR, Symbol, the Cosmic Cube, MU5, S1, ... the list goes on.  If given
the opportunity, a lot can be learned about alternative ways to do
things.  In my view the product of research is knowledge about what to
do next.  Even at the commercial
level very interesting machines have failed miserably (cf. B1700, and
CDC star) and rather Ho-Hum Dingers (M68000, IBM 360 and the Prime
clones) have been tremendous successes.

I applaud Shaw and company for giving it a go along with countless
others.  They will almost certainly fail to beat IBM in the market
place.  Hopefully they aren't even trying.  Every 7 seconds somebody
buys an IBM PC - if that isn't an inspiration for any budding architect
to do better, then what is?

Additionally, the big debate over whether CS or AI is THE way is
absurd.  CS has a lot to do with computers and little to do with
science, and AI has a lot to do with artificial and little to do with
intelligence.  Both will and have given us something worthwhile, and a
lot of drivel too.  The "drivel factor" could be radically reduced if
egotism and ambition were replaced with honesty and
responsibility.

Enough said.

                                        Al Davis
                                        FLAIR

------------------------------

Date: Mon, 28 May 84 14:28:32 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Identity

    The thing about sameness and difference is that humans create them;  back
to the metaphor and simile question again.  We say, "Oh, he's the same old
Bill.", and in some sense we know that Bill differs from "old Bill" in many
ways we cannot know.  (He got a heart transplant, ...)  We define by
declaration the context within which we organize the set of sensory perceptions
we call Bill and within that we recognize "the same old Bill" and think that
the sameness is an attribute of Bill!  No wonder the eastern sages say that we
are asleep!

[Read Hubert Dreyfus' book "What Computers Can't Do".]

  --Charlie

------------------------------

Date: Wed, 30 May 1984  16:15 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: A restatement of the problem (phil/ai)

  From: (Alan Wexelblat) decvax!ittvax!wxlvax!rlw @ Ucb-Vax

  Suppose that, while touring through the grounds of a Hollywood movie
  studio, I approach what, at first, I take to be a tree.  As I come
  near to it, I suddenly realize that what I have been approaching is,
  in fact, not a tree at all but a cleverly constructed stage prop.

  So, let me re-pose my original question: As I understand it, issues of
  perception in AI today are taken to be issues of feature-recognition.
  But since no set of features (including spatial and temporal ones) can
  ever possibly uniquely identify an object across time, it seems to me
  (us) that this approach is a priori doomed to failure.

Spatial and temporal features, and other properties of objects that
have to do with continuity and coherence in space and time DO identify
objects in time.  That's what motion, location, and speed detectors in
our brains do.  Maybe they don't identify objects uniquely, but they
do a good enough job most of the time for us to make the INFERENCE of
object identity.  In the example above, the visual features remained
largely the same or changed continuously --- color, texture normalized
by distance, certainly continuity of boundary and position.  It was
the conceptual category that changed: from tree to stage prop.  These
latter properties are conceptual, not particularly visual (although
presumably it was minute visual cues that revealed the identity in the
first place).  The bug in the above example is that no distinction is
made between visual features and higher-level conceptual properties,
such as what a thing is for.  Also, identity is seen to be this
unitary thing, which, I think, it is not.  Similarities between
objects are relative to contexts.  The above stage prop had
spatio-temporal continuity (i.e., identity) but not conceptual
continuity.
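
A toy sketch of this point: a tracker that carries object identity purely
by spatio-temporal continuity, so the conceptual label on a track can
change (tree to stage prop) while the identity does not.  The names,
thresholds, and coordinates here are all invented:

```python
# Toy tracker: identity is carried by continuity of position alone.
# A track keeps its id across frames as long as each new detection is
# close to the last known position, even when the conceptual label
# changes.  Thresholds and data are invented for illustration.

def associate(tracks, detections, max_jump=2.0):
    """Greedily extend each track with the nearest unclaimed detection."""
    for track in tracks:
        x, y = track["pos"]
        def dist2(d):
            return (d["pos"][0] - x) ** 2 + (d["pos"][1] - y) ** 2
        best = min(detections, key=dist2, default=None)
        if best is not None and dist2(best) <= max_jump ** 2:
            track["pos"] = best["pos"]
            track["label"] = best["label"]  # category may change; id does not
            detections.remove(best)

tracks = [{"id": 1, "pos": (0.0, 0.0), "label": "tree"}]
frames = [[{"pos": (0.5, 0.1), "label": "tree"}],
          [{"pos": (1.0, 0.2), "label": "stage prop"}]]  # a closer look
for dets in frames:
    associate(tracks, list(dets))
# tracks[0] still has id 1, but its label is now "stage prop"
```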

Fanya Montalvo

------------------------------

Date: Wed, 30 May 84 09:18 EDT
From: Izchak Miller <Izchak%upenn.csnet@csnet-relay.arpa>
Subject: The experience of cross-time identity.

      A follow-up to Rosenberg's reply [greetings, Jay].  Most
commentators on Alan's original statement of the problem have failed to
distinguish between two different (even if related) questions:
   (a) what are the conditions for the cross-time (numerical) identity
       of OBJECTS, and
   (b) what are the features constitutive of our cross-time EXPERIENCE
       of the (numerical) identity of objects.
The first is an ontological (metaphysical) question, the second is an
epistemological question--a question about the structure of cognition.
      Most commentators addressed the first question, and Rosenberg suggests
a good answer to it. But it is the second question which is of importance to
AI. For, if AI is to simulate perception, it must first find out how
perception works. The reigning view is that the cross-time experience of the
(numerical) identity of objects is facilitated by PATTERN RECOGNITION.
However, while it does indeed play a role in the cognition of identity, there
are good grounds for doubting that pattern recognition can, by itself,
account for our cross-time PERCEPTUAL experience of the (numerical) sameness
of objects.
     The reasons for this doubt originate from considerations of cases of
EXPERIENCE of misperception.  Put briefly, two features are characteristic of
the EXPERIENCE of misperception: first, we undergo a "change of mind"
regarding the properties we attribute to the object; we end up attributing to it
properties *incompatible* with properties we attributed to it earlier. But--
and this is the second feature--despite this change we take the object to have
remained *numerically one and the same*.
     Now, there do not seem to be constraints on our perceptual "change of
mind": we can take ourselves to have misperceived ANY (and any number) of the
object's properties -- including its spatio-temporal ones -- and still
experience the object to be numerically the same one we experienced all along.
The question is how do we maintain a conscious "fix" on the object across such
radical "changes of mind"?  Clearly, "pattern recognition" does not seem a
good answer anymore since it is precisely the patterns of our expectations
regarding the attributes of the object which change radically, and
incompatibly, across the experience of misperception.  It seems reasonable to
conclude that we maintain such a fix "demonstratively" (indexically), that is,
independently of whether or not the object satisfies the attributive content
(or "pattern") of our perception.
     All this does not by itself spell doom (as Alan enthusiastically seems
to suggest) for AI, but it does suggest that insofar as "pattern recognition"
is the guiding principle of AI's research toward modeling perception, this
research is probably a dead end.

                                         Izchak (Isaac) Miller
                                         Dept. of Philosophy
                                         University of Pennsylvania

------------------------------

Date: 24 May 84 9:04:56-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!clyde!burl!ulysses!unc!mcnc!ncsu!uvacs!gmf
      @ Ucb-Vax.arpa
Subject: Comment on Greek ship problem
Article-I.D.: uvacs.1317

Reading about the Greek ship problem reminded me of an old joke --
recorded in fact by one Hierocles, 5th century A.D. (Lord knows how
old it was then):

     A foolish fellow who had a house to sell took a brick from one wall
     to show as a sample.

Cf. Jay Rosenberg:  "A board is a part of a ship *at a time*.  Once it's
been removed and replaced, it no longer *is* a part of the ship.  It
only once *was* a part of the ship."

Hierocles is referred to as a "new Platonist", so maybe he was a
philosopher.  On the other hand, maybe he was a gag-writer.  Another
by him:

     During a storm, the passengers on board a vessel that appeared in
     danger, seized different implements to aid them in swimming, and
     one of them picked for this purpose the anchor.

Rosenberg's remark quoted above becomes even clearer if "board" is
replaced by "anchor" (due, no doubt, to the relative anonymity of
boards, as compared with anchors).

     Gordon Fisher

------------------------------

Date: 4 Jun 84 7:47:08-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!gurr @ Ucb-Vax.arpa
Subject: Re: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: west44.211

    The point being, if WE can't decide logically what constitutes a "REAL"
    perception for ourselves (and I contend that there is no LOGICAL way out
    of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis
    if another human, not to mention a computer, has perception?  We can't!!

    Therefore we operate on a faith basis a la Turing and move forward on a
    practical level and don't ask silly questions like, "Can Computers Think?".


        For an in depth discussion on this, read "The Mind's I" by Douglas R.
Hofstadter and Daniel C. Dennett - this also brings in the idea that you can't
even prove that YOU, not to mention another human being, can have perception!

                 mcvax
                 /
               ukc!root44!west44!gurr
              /  \
        vax135   hou3b
             \   /
             akgua


        Dave Gurr, Westfield College, Univ. of London, England.

------------------------------

Date: Tue 29 May 84 08:44:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral - Perceptual Organization for Visual Recognition

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                                  Ph.D. Oral

                         Friday, June 1, 1984 at 2:15

                         Margaret Jacks Hall, Room 146

           The Use of Perceptual Organization for Visual Recognition

                   By David Lowe (Stanford Univ., CS Dept.)


Perceptual organization refers to the capability of the human visual system to
spontaneously derive groupings and structures from an image without
higher-level knowledge of its contents.  This capability is currently missing
from most computer vision systems.  It will be shown that perceptual groupings
can play at least three important roles in visual recognition:  1) image
segmentation, 2) direct inference of three-space relations, and 3) indexing
world knowledge for subsequent matching.  These functions are based upon the
expectation that image groupings reflect actual structure of the scene rather
than accidental alignment of image elements.  A number of principles of
perceptual organization will be derived from this criterion of
non-accidentalness and from the need to limit computational complexity.  The
use of perceptual groupings will be demonstrated for segmenting image curves
and for the direct inference of three-space properties from the image.  These
methods will be compared and contrasted with the work on perceptual
organization done in Gestalt psychology.

Much computer vision research has been based on the assumption that recognition
will proceed bottom-up from the image to an intermediate depth representation,
and subsequently to model-based recognition.  While perceptual groupings can
contribute to this depth representation, they can also provide an alternate
pathway to recognition for those cases in which there is insufficient
information for bottom-up derivation of the depth representation.  Methods will
be presented for using perceptual groupings to index world knowledge and for
subsequently matching three-dimensional models directly to the image for
verification.  Examples will be given in which this alternate pathway seems to
be the only possible route to recognition.

------------------------------

End of AIList Digest
********************

∂01-Jun-84  1743	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #68
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Jun 84  17:42:42 PDT
Date: Fri  1 Jun 1984 15:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #68
To: AIList@SRI-AI


AIList Digest            Saturday, 2 Jun 1984      Volume 2 : Issue 68

Today's Topics:
  Scientific Method - Perception,
  Philosophy - Essence & Soul,
  Parapsychology - Scientific Method & Electromagnetics,
  Seminars - Knowledge-Based Plant Diagnosis & Learning Procedures
----------------------------------------------------------------------

Date: 31 May 84 9:00:56-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax.arpa
Subject: Re: "I see", said the carpenter... (PERCEPTION)
Article-I.D.: ariel.652

The idea of proof or disproof rests, in part, on the recognition that the
senses are valid and that perceptions do exist...  Any attempt to disprove
the existence of perceptions is an attempt to undercut all proof and all
knowledge.  --ariel!norm

------------------------------

Date: Wed 30 May 84 12:18:42-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Essences and soul

        In response to Minsky's comments about soul (AIList vol
2, #63): this is a "straw man" argument, based on a particular
concept of soul - "... The common concept of soul says that ...".
Like a straw man, this particular concept is easily attacked;
however, the general question of soul as a concept is not
addressed.  This bothers me because I think that raising the
question in this manner can result in generating a lot of heat
(flames) at the expense of light.  I hope the following thoughts
contribute more light than heat.

        Soul has been used to name (at least) two similar
concepts:

  o  Soul as the essence of consciousness, and
  o  Soul as a form of consciousness separate from the body.

        The concept of soul as the essence of consciousness we
can handle as simply another name for consciousness.

        The concept of soul as a form of consciousness separate
from the body is more difficult: it is the mind/body problem
revisited.  You can take a categorical position on the existence
of the soul/mind as separate from the body (DOES!/DOESN'T!) but
proving or disproving it is more difficult.  To prove the concept
requires public evidence of phenomena that require this concept
for their reasonable explanation; to disprove the concept requires
proving that it clearly contradicts other known facts.  Since
neither situation seems to hold, we are left to shave with
Occam's Razor, and we should note our comments on the hypothesis
as opinions, not facts.

        The concept of soul/consciousness as the result of
growth, of learning, seems right: I am what I have learned - what
I have experienced plus my decisions and actions concerning these
experiences.  I wouldn't be "me" without them.  However, it is
also possible to create various theories of "disembodied" soul
which are compatible with learning.  For example, you could have
a reincarnation theory that has past experiences shut off during
the current life so that they do not interfere with fresh
learning, etc.

        Please note: I am not proposing any theories of
disembodied soul.  I am arguing against unproven, categorical
positions for or against such theories.  I believe that a
scientist, speaking as a scientist, should be an agnostic -
neither a theist nor an atheist.  It may be that souls do not
exist; on the other hand, it may be that they do.  Science is
open, not closed.  There are many things that - regardless of our
fear of the unknown and disorder - occur publicly and regularly
for which we have no convincing explanation based on current
science.  Meteors as stones falling from heaven did not exist
according to earlier scientists - until there was such a fall of
them in France in the 1800's that their existence had to be
accepted.  There will be a 21st and a 22nd century science, and
they will probably look back on our times with the same bemused
nostalgia and incredulity that we view 18th and 19th century
science.

------------------------------

Date: Thu, 31 May 1984  18:27 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Essences and Soul


I can't make much sense of Wenger's reply:

        Therefore claiming that essential aspects do not exist in the
        phenomenon of consciousness is in the present state of
        scientific knowledge an unreasonable reaction that
        unnecessarily narrows the field of our investigation.

I wasn't talking about consciousness.  Actually, I think consciousness
will turn out to be relatively simple, namely the phenomenon connected
with the procedures we use for managing very short term memory,
duration about 1 second, and which we use to analyse what some of our
mental processes have been doing lately.  The reason consciousness
seems so hard to describe is just that it uses these processes and
screws up when applied to itself.

But Wenger seems intent on mixing everything up:

        However, we cannot extrapolate from our assumptions to
        statements about the essence of one's being, first because
        assumptions are not facts yet, secondly because intelligence
        and consciousness may not be the same thing.

Who said anything about intelligence and consciousness?  If soul is the whole
mind, then fine, but if he is going to talk about essences that change along
with this, well, I don't think anything is being discussed except convictions
of self-importance, regardless of any measure of importance.

 --- Minsky

------------------------------

Date: 31 May 84 15:31:58-PDT (Thu)
From: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
Subject: Re: Dreams: A Far-Out Suggestion
Article-I.D.: decwrl.894

    Ken Laws <Laws@SRI-AI.ARPA> summarizes an article in the May Dr. Dobb's
    Journal called "Sixth Generation Computers" by Richard Grigonis.  Among
    other things it proposes that standing waves of very low frequency
    electromagnetic radiation (5 to 20 Hz apparently) be used to explain
    telepathy.

As the only person I know of with significant involvement in both the fields
of AI and parapsychology, I felt I should respond.

1) Though there is "growing evidence" that ESP works, there is none that
telepathy does.  We can order the major classes of ESP phenomena by their a
priori believability; from most believable to least: telepathy (mind-to-mind
communication), clairvoyance (remote perception) and precognition (perception
of events which have not yet taken place).  "Some-kind-of mental radio" doesn't
seem too strange.  "Some-kind-of mental radar" is stretching it, while
precognition seems to be something akin (literally) to black magic.  There is
thus a tendency, even among parapsychologists, to think of ESP in terms of
telepathy.

Unfortunately it is fairly easy to design an experiment in which telepathy
cannot be an element but precognition or clairvoyance is.  Experiments which
exclude telepathy as an explanation have roughly the same success rate
(approximately 1 experiment out of 3 show statistical significance above the
p=.01 level) as experiments whose results could be explained by telepathy.
Furthermore, in any well controlled telepathy experiment a record must be made
of the targets (i.e. what was thought).  Since an external record is kept,
clairvoyance and/or precognition cannot be excluded as an explanation for the
results in a telepathy experiment.  For this reason experiments designed to
allow telepathy as a mechanism are known in parapsychology as "general ESP"
(GESP) experiments.
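[A quick check of why the quoted success rate is striking: under the null
hypothesis, each experiment has only a 1% chance of reaching p = .01, so a
third of experiments succeeding is wildly improbable.  The sketch below uses
made-up experiment counts (30 experiments, 10 significant), not figures from
this message:]

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    'significant' experiments out of n if each reaches significance
    with probability p under the null hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical batch: 30 experiments, 10 significant at p = .01.
# The tail probability is astronomically small.
print(binom_tail(30, 10, 0.01))
```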

Telepathy still might be proven as a separate phenomenon if a positive
differential effect could be shown (i.e. if having someone else looking at the
target improves the score).  Several researchers have claimed just such an
effect. None have, however, to the best of my knowledge, eliminated from their
experiments two alternate explanations for the differential: 1) The subjects
are more comfortable with telepathy than with other ESP and thus score higher
(subject expectation is strongly correlated with success in ESP). 2) Two
subjects working together for a result would get higher scores whether or not
one of them knows the targets.  It's rather difficult to eliminate both of these
alternatives from an experiment simultaneously.

The proposed mechanism MIGHT be used to explain rather gross clairvoyance (e.g.
dowsing) but would be hard pressed to distinguish, for example, ink in the
shape of a circle from that of a square on a playing card. It is obviously no
help at all in explaining precognition results.

2) Experiments have frequently been conducted from within a Faraday cage (this
is a necessity if a sensitive EEG is used, of course) and even completely sealed
metal containers.  It was just this discovery which led the Soviets to decide
in the late 20s (early 30s?) that ESP violated dialectical materialism, and was
thus an obvious capitalist plot.  Officially sanctioned research in
parapsychology did not get started again in the Soviet Union until the early
70s when some major US news source (the NY Times? Time magazine?) apparently
reported a rumor (apparently inaccurate) that the US DoD was conducting
experiments in the use of ESP to communicate with submarines.

3) Low frequency means low bandwidth.  ESP seems to operate over a high
bandwidth channel with lots of noise (since very high information messages seem
to come through it sometimes).

4) Natural interference (low frequency electromagnetic waves are for example
generated by geological processes) would tend to make the position of the nodes
in the standing waves virtually unpredictable.

5) Low frequency (long wavelength) requires a big antenna both for effective
broadcast and reception.  The unmoving human brain is rather small for this
since the wavelength of an electromagnetic wave with a frequency of 5 Hz is
about 37200 miles.  Synthetic aperture radar compensates for a small antenna
by comparing the signal before and after movement (actually the movement is
continuous).  I'm not sure of the typical size of the antennas used in SAR, but
the SAR aboard LandSAT operated at a frequency of 1.275 GHz which corresponds
to a wavelength of about 9.25 inches.  The antenna is probably about one
wavelength long.  To use that technique the antenna (in this case brain) would
have to move a distance comparable to a wavelength (37200 miles) at the least,
and the signal would have to be static over the time needed to move the
distance.  This doesn't seem to fit the bill.
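[The wavelength figures above follow directly from lambda = c/f; a quick
arithmetic check, assuming nothing beyond the speed of light:]

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters of an EM wave at the given frequency."""
    return C / freq_hz

# 5 Hz ELF wave: roughly 37,000 miles, matching the figure above.
print(wavelength_m(5) / 1609.344)        # meters -> miles
# LandSAT SAR at 1.275 GHz: about 9.26 inches, also as above.
print(wavelength_m(1.275e9) / 0.0254)    # meters -> inches
```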

I'm out of my depth in signal detection theory, but it might be practical to
measure the potential of the wave at a single location relative to some static
reference and integrate over time.  The static reference would require
something like a Faraday cage in one's head.  Does anyone know if this is
practical?  We'd still have a serious bandwidth problem.

The last possibility would be the techniques used in Long Baseline Radio
Interferometry (large array radio telescopes).  This consists of using several
antennas distributed in space to "synthesize" a large antenna. Unfortunately
the antennas have to communicate over another channel, and that channel would
(if the antennas are brains) be equivalent to a second telepathy channel, so
we would have explained nothing except the completely undemonstrated ability
of human beings to decode very low frequency electromagnetic radiation.

In summary: Even if you accept the evidence for ESP (as I do) the proposed
mechanism does not seem to explain it.

I'll be glad to receive replies to the above via mail, but unless it's
relevant to AI (e.g. a discussion of the implications of ESP for mechanistic
models of brain function) we should move this discussion elsewhere.

                                Topher Cooper
(The above opinions are my own and do not necessarily represent those of my
employer, my friends or the parapsychological research community).

USENET: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
ARPA: COOPER.DIGITAL@CSNET-RELAY

------------------------------

Date: 23 May 84 16:04:38 EDT
From: WATANABE@RUTGERS.ARPA
Subject: Seminar - Knowledge-Based Plant Diagnosis

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

Date:   June 14 (Thursday), 1984
Time:   1:30-2:30PM
Place:  Hill 705

Title:  Preliminary Study of Plant Diagnosis
        by Knowledge about System Description


Speaker:        Dr. Hiroshi Motoda

                Energy Research Laboratory,
                Hitachi Ltd.,
                1168 Moriyamacho, Hitachi,
                Ibaraki 316, Japan


INTRODUCTION:

Some model, whatever form it takes, is required to perform plant
diagnosis.  Generally, this  model describes  anomaly propagation  and
can be regarded as knowledge about cause and consequence relationships
of anomaly situations.

Knowledge engineering is a software  technique that uses knowledge  in
problem solving.  One  of its  characteristics  is the  separation  of
knowledge from inference mechanism, in  which the latter builds  logic
of events on the basis of the former.  The knowledge can be supplied
piecewise and is easily modified for improvement.

The possibility is suggested of making a diagnosis by collecting many
pieces of knowledge about causality relationships.  The power lies in the
knowledge, not in the inference mechanism.  What is not in the knowledge
base is out of the scope of the diagnosis.
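[The diagnosis-by-causality idea above can be sketched as a backward
traversal over pieces of cause-and-consequence knowledge.  The anomaly
names and knowledge base below are invented for illustration and are not
from the talk:]

```python
def candidate_causes(symptom, causes_of):
    """Collect every anomaly that could, directly or through a chain,
    cause the observed symptom.  causes_of maps each anomaly to a list
    of its possible direct causes (the individual knowledge pieces)."""
    found = set()
    def walk(anomaly):
        for cause in causes_of.get(anomaly, []):
            if cause not in found:
                found.add(cause)
                walk(cause)
    walk(symptom)
    return found

# Hypothetical plant knowledge base:
kb = {
    "high_core_temp": ["coolant_loss"],
    "coolant_loss": ["pump_failure", "pipe_leak"],
}
print(candidate_causes("high_core_temp", kb))
```

Anything absent from the knowledge base is, as the abstract says, simply
outside the scope of the diagnosis.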

Use of  resolution  in the  predicate  calculus logic  has  shown  the
possibility of using knowledge about system description (structure and
behavior of  the  plant) to  generate  knowledge directly  useful  for
diagnosis. The problem of this  approach was its inefficiency. It  was
felt necessary to devise  a mechanism that  performs the same  logical
operation much faster.

Efficiency has been improved by 1) expressing the knowledge in frames
and 2) enhancing the memory management capability of LISP to control
data in a global memory shared by both LISP (for symbolic manipulation)
and FORTRAN (for numeric computation).

REFERENCES:

Yamada,N. and Motoda,H.; "A Diagnosis Method of Dynamic System using
the Knowledge on System Description," Proc. of IJCAI-83, 225, 1983.

------------------------------

Date: 31 May 1984 1146-EDT
From: Wendy Gissendanner <WLG@CMU-CS-C.ARPA>
Subject: Seminar - Learning Procedures

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

AI SEMINAR
Tuesday, June 5, 5409 Wean Hall

Speaker: Kurt Van Lehn (Xerox PARC)

Title: Learning Procedures One Disjunct Per Lesson

How can procedures be learned from examples?  A new technique is to use
the manner in which the examples are presented, their sequence and how
they are partitioned into lessons.  Two manner constraints will be
discussed: (a) that the learner acquires at most one disjunct per lesson
(e.g., one conditional branch per lesson), and (b) that nests of
functions be taught using examples that display the intermediate results
(show-work examples) before the regular examples, which do not display
intermediate results.  Using these constraints, plus several standard AI
techniques, a computer system, Sierra, has learned procedures for
arithmetic, algebra and other symbol manipulation skills.  Sierra is the
model (i.e., prediction calculator) for Step Theory, a fairly well
tested theory of how people learn (and mislearn) certain procedural
skills.

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #69
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:03:25 PST
Mail-From: LAWS created at  5-Jun-84 10:13:32
Date: Tue  5 Jun 1984 10:06-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #69
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:03:43-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest            Tuesday, 5 Jun 1984       Volume 2 : Issue 69

Today's Topics:
  Parapsychology - ESP,
  Philosophy - Correction & Essences,
  Cognitive Psychology - Mental Partitioning,
  Seminars - Knowledge Representation & Expert Systems
----------------------------------------------------------------------

Date: Mon, 4 Jun 84 18:50:50 PDT
From: Michael Dyer <dyer@UCLA-CS.ARPA>
Subject: ESP

to:  Topher Cooper & others who claim to believe in ESP

1.  this discussion SHOULD be moved off AIList.
2.  the technical discussion of wavelengths, etc is fine but
3.  anyone who claims to believe in current ESP should FIRST read
        the book:  FLIM-FLAM by James Randi  (the "Skeptical Inquirer"
        journal has already been mentioned once but deserves
        a second mention)

------------------------------

Date: 31 May 84 19:31:04-PDT (Thu)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax.arpa
Subject: Message for all phil/ai persons
Article-I.D.: wxlvax.287

Dear net readers,
        I must now apologize for a serious error that I have committed.
Recently, I posted two messages on the topic of philosophy of AI.  These
messages concerned a topic that I had discussed with one of my professors,
Dr. Izchak Miller.  I signed those messages with both his name and mine.
Unfortunately, he did not see those messages before they were posted.  He
has now indicated to me that he wishes to disassociate himself from the
contents of those messages.  Since I have no way of knowing which of you
saw my error, I am posting this apology publicly, for all to see.  All
responses to those messages should be directed exclusively to me, at the
address below.  I am sorry for taking up net resources with this message,
but I feel that this matter is important enough.  Again, I apologize, and
accept all responsibility for the messages.

--Alan Wexelblat
(currently appearing at:  ...decvax!ittvax!wxlvax!rlw.  Please put "For Alan"
 in all mail headers.)

------------------------------

Date: Mon 4 Jun 84 13:49:58-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Essences, objects, and modelling

        All the net conversation about essences is fascinating,
but can be a little fuzzy making.  It made me go back and review
some of the basics.  At the risk of displaying naivete and/or a
firm grasp of the obvious, I thought I would pass some of my
thoughts along.

        The problem of essences has been treated in philosophy
under the heading of metaphysics, specifically ontology.  I have
found a good book covering these problems in short, clear text.
It is: "Problems & Theories of Philosophy" by Ajdukiewicz,
Cambridge University Press, 1975, 170 pp. in paperback.

        About substance (from the book, p. 78):

        ".... the fundamental one is that which it was given by
Aristotle.  He describes substance as that of which something can
be predicated but which cannot itself be predicated of anything
else.  In other words, substance is everything to which some
properties can be attributed, which can stand in a certain
relationship to something else, which can be in this state, etc.,
but which is not itself a property, relation or a state, etc.
Examples of substances are: this, this table, this person, in a
word concrete individual things and persons.  To substance are
opposed properties which in contradistinction to substances can
be predicated of something, relations which also in
contradistinction can obtain between certain objects, states,
etc.  The scholastics emphasized the self-subsistence of
substance in contrast to the non-self-subsistence of properties,
relations, states, etc.  The property of redness, for example,
cannot exist except in a substance that possesses it.  This
particular rose, however, of which redness is an attribute, does
not need any foundations for its existence but exists on its own.
This self-subsistence of substance they considered to be its
essential property and they defined substance as 'res, qui
convenit esse in se vel per se'."

        To me, this implies that an object/substance is an
axiomatic "thing" that exists independently - it is the rock that
kicks back each time I kick it - with the characteristic that it
is "there", meaning that each time I kick at the rock, it is
there to kick back.  You can hang attributes on it in order to
identify it from some other thing, both now and over time.  The
Greek Ship problem in this approach becomes one of identifying
that Object, the Greek Ship, which has maintained continuous
existence as The Greek Ship - i.e., can "be kicked" at any time.

        This brings us to one of the problems being addressed by
this discussion of essences, which is distinguishing between
objects and abstractions of objects, i.e. between proper nouns
and abstract/general nouns.  A proper noun refers to a real
object, which can never - logically - be fully known in the sense
that we cannot be sure that we know *all* of its attributes or
that we *know* that the attributes we do know are unchanging or
completely predictable.  We can always be surprised, and any
inferences we make from "known" attributes are subject to change.
Real objects are messy and ornery.  An abstract object, like pure
mathematics, is much neater: it has *only* those attributes we
give it in its definition, and there WILL BE no surprises.

        The amazing thing is that mathematics works: a study of
abstractions can predict things in the real world of objects!
This seems to work on the "Principle of Minimum Astonishment"
(phrase stolen from Lou Schaefer @ SRI), which I interpret to
mean that "To the extent that this real object possesses the same
characteristics as that abstract object, this real object will
act the same as that abstract object, *assuming that it doesn't
do anything else particularly astonishing*."  And how many
carefully planned experiments have foundered on THAT one.  There
is *nothing* that says that the sun *will* come up tomorrow
except the Principle of Minimum Astonishment.

        So what?  So, studies of abstractions are useful;
however, an abstract object is not the same as a real object: the
model is not the same as the thing being modelled.  There is not
an infinite recursion of attributes; somewhere there is a rock
that kicks back, a source of data/experience from *outside* the
system.  The problem is - usually - to create/select/update an
abstract model of this external object and to predict our
interactions with it on the basis of the model.  The problem of
"identifying" an object is typically not identifying *which* real
object it is but *what kind* of object is it - what is the model
to use to predict the results of our interaction with it.

        It seems to me that model forming and testing is one of
the big, interesting problems in AI.  I think that is why we are
all interested in abstraction, metaphor, analogy, philosophy,
etc.  I think that keeping the distinction between the model and
the object/reality is useful.  To me, it tends to imply two sets
of data about an object: the historical interaction data and the
abstracted data contained in the current model of the object.
Perhaps these two data sets should be kept more formally separate
than is often done.

        This has gotten quite long winded - it's fun stuff.  I
hope that this is useful/interesting/fun!

Dave Wyland
WYLAND@SRI

------------------------------

Date: Sat, 2 Jun 84 13:11:35 PDT
From: Philip Kahn <kahn@UCLA-CS.ARPA>
Subject: Relevance of "essences" and "souls" to Artificial Intelligence

        Quite a bit of the AILIST has been devoted of late to metaphysical
discussions of "essences" (e.g., the Greek ship "problem") and "souls."
I don't argue the authors' viewpoints, but the discussion has strayed far
from the intent of the original Greek ship problem.  In short, the problem
with "essences" and "souls" are the questions posed, and not the answers
given.

        We are concerned with creating intelligent machines (whether we consider
it "artificial" or "authentic").  The "problem" of "essence" arises only
because a hard-and-fast, black-and-white discrimination is demanded:
whether "The reassembled ship is 'essentially' the same."  It should be
clear that the question phrased as such cannot be answered adequately because
it is not relevant.  You can say "it looks the same," "it weighs the same," "it
has the same components," but how useful is it for the purposes of an
intelligent machine (or person) to know whether it is "essentially" the same
ship?  The field of AI is so young that we do not even have a decent method of
determining that it even IS a Greek ship.  Before we attempt to incorporate
such philosophical determinations in a machine, wouldn't it be more useful
to solve the more pressing problem of object identification before problems of
esoteric object distinctions are examined?
        The problem of "souls" is also not relevant to the study of AI (though
it is undoubtedly of great import to our understanding of our role as humans
in the universe).  A "soul," like the concept of "essence," is undefinable.
The problem of "cognition" is far more relevant to the study of AI because
it can be defined within some domain; it is the object oriented interpretation
of some phenomena (e.g., vision, auditory, context, etc.).  Whether "cognition"
constitutes a "soul" is again not relevant.  The more pressing problem is
the problem of creating a sufficient "cognitive" machine that can make
object-oriented interpretations of sensory data and contextual information.
While the question of whether a "soul" falls out of this mechanism may
be of philosophical interest, it moves us no closer to the description of
such a mechanism.

                        Another writer's opinion,
                        P.K.

------------------------------

Date: 3 Jun 84 12:24:57-PDT (Sun)
From: decvax!cwruecmp!borgia @ Ucb-Vax.arpa
Subject: Re: Essences and soul
Article-I.D.: cwruecmp.1173

** This is somewhat long ...
   You might learn something new ...
   ... from Intellectuals Anonymous (IA not AI)
**
A few years ago, I became acquainted with an international group called
Community International that operates through a technique called Guided
Experiences to assist individuals in their progress towards self
actualization. I remember that some of the techniques like Dis-tension,
and the Experience of Peace were so effective that the Gurus in the
group were sought by major corporations for their Executive Development
programs. The Community itself is a non-profit, self-sustaining
organization that originated somewhere in South America.

The Community had a very interesting (scientific?) model for the body
and soul (existence and essence) problem. The model is based on levels
or Centers for the Mind.

I will summarize what I remember about the Centers of the Mind.

1. The major Centers of the Mind are the Physiological Center, the Motor
Center, the Emotional Center, and the Intellectual Center.

2. The functional parts of the Mind belong to different (matrix) cells
in a tabulation of major Center X major Center.

To illustrate the power of this abstraction, consider the following
assertions where the loaded words have the usual meaning.

The intellectual part of the intellectual center deals with reason or
cognition. The rationalist AI persons must already feel very small.
Reliance on reason alone indicates a poverty of the mind!

The motor part of the intellectual center deals with imagination and
creativity. The emotional part of the intellectual center deals with
intuition.

Similarly the motor center has intellectual, emotional and motor
parts that control functions like learning to walk, the Olympics, and
reflexes.

The emotional center has intellectual, emotional, and motor parts that
control faith and beliefs, the usual emotions like fear, anger, joy etc.
and stuff like euphoria, erotica.

The Physiological center is unfortunately the least understood. The
center controls the survival drives for food, sex, safety etc.
(And I believe, rational economic behaviour, free markets etc.)

The thesis is that the lower centers (Physiological) must be developed
before the higher centers can be productive. This must seem obvious
since we don't expect a starving man to cry out with joy, or an
emotionally disturbed person to reason effectively.

************************************************************************
I would appreciate any comments, anonymous or otherwise. Does this make
any sense to you? Does this change your picture of your own mind?
************************************************************************

------------------------------

Date: Mon, 4 Jun 84 17:07:34 PDT
From: Joe Halpern <HALPERN%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminars - Knowledge Representation

    [Forwarded from the IBM/Halpern distribution by Laws@SRI-AI.]

The knowledge seminar will be meeting again at 10 AM, Friday, June 8,
in Auditorium A of Building 28 at IBM.  This week Joe Karnicky will
speak on "Knowledge Representation and Manipulation in an Expert System"
and I will speak on work in progress entitled "Towards a Theory of
Knowledge and Ignorance".  I have appended the abstracts
below.  I have speakers lined up for three more sessions, which will
be  held June 22, July 6, and July 20.  After that the seminar will stop,
unless we can find some more volunteers to speak.  As you can see by my
talk, discussing work in progress is perfectly reasonable, as is talking
about research other than your own.  If you have any suggestions for
speakers, or directions the seminar might take, please let me know.

10 AM -- Knowledge Representation and Manipulation in an Expert System
         Joe Karnicky, Varian Systems and Techniques Lab (Palo Alto)

We are constructing an expert advisory system for chromatography,
i.e. a computer program which is to perform as an advisor to analytical
chemists (chromatographers) with functionality on the level of human experts.
One of the most important considerations in the design of such a program
is the choice of  techniques for the representation and manipulation of
the knowledge in the system.    I will discuss these choices of knowledge
representation, the results we have achieved, and the advantages
and disadvantages we have discovered.
    The techniques to be discussed include:
  PREDICATE LOGIC-inference by a Prolog-type interpreter (backward chaining
+ unification) modified to include certainty factors and predicates to be
evaluated outside of the rule base.
  PRODUCTION SYSTEMS-collections of situation-action (if..., then...) rules.
  FRAMES-hierarchically related data structures.
  PROCEDURES- small programs for specific tasks in specific situations.
  ANALOG REPRESENTATIONS-in this case, a detector's output signal vs. time.
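[A minimal sketch of the first technique, backward chaining with certainty
factors.  The rules, proposition names, and combination scheme (rule CF
times the minimum of the premise CFs, as in MYCIN) are illustrative
assumptions, not details of the Varian system:]

```python
def prove(goal, facts, rules):
    """Backward-chain to `goal`, returning a certainty factor in [0, 1].
    facts: {proposition: cf}; rules: list of (head, [body...], rule_cf).
    A rule fires only if every premise has positive certainty; its
    conclusion gets rule_cf * min(premise cfs)."""
    if goal in facts:
        return facts[goal]
    best = 0.0
    for head, body, rule_cf in rules:
        if head == goal:
            cfs = [prove(b, facts, rules) for b in body]
            if all(c > 0.0 for c in cfs):
                best = max(best, rule_cf * min(cfs))
    return best

# Hypothetical chromatography knowledge:
facts = {"peaks_broadened": 1.0, "retention_shifted": 0.8}
rules = [("column_degraded", ["peaks_broadened", "retention_shifted"], 0.7)]
print(prove("column_degraded", facts, rules))  # 0.7 * min(1.0, 0.8)
```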

11 AM. -- Towards a Theory of Knowledge and Ignorance
          Joe Halpern, IBM Research

Suppose you only have partial information about a particular domain.
What can you be said to know in that case?  This turns out to be a
surprisingly tricky question to answer, especially if we assume that
you have introspective knowledge about your knowledge.  In particular,
you know far more than the logical consequences of your information.
For example, if my partial information does not tell me anything about
the price of tea in China, then I know I don't know anything about the
price of tea in China.  Moreover, I know that no one else knows that
I know the price of tea in China (since in fact I don't).  Yet this
knowledge is not a logical consequence of my information, which
doesn't mention the price of tea in China at all!
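[The introspection claim has a crisp reading in the standard possible-worlds
(S5) model of knowledge: an agent knows a proposition iff it holds in every
world compatible with the agent's information.  A toy sketch, with a
two-world setup invented purely for illustration:]

```python
# Two worlds differing only in a fact the agent's information says
# nothing about, so the agent cannot tell them apart.
WORLDS = {"w1": True, "w2": False}  # truth value of "tea is expensive"
# S5 accessibility: both worlds sit in the same information cell.
REACHABLE = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}

def knows(world, prop):
    """The agent knows `prop` at `world` iff prop holds in every world
    the agent considers possible there."""
    return all(prop(v) for v in REACHABLE[world])

tea = lambda w: WORLDS[w]

print(knows("w1", tea))                          # False: can't rule out w2
print(knows("w1", lambda w: not knows(w, tea)))  # True: knows it doesn't know
```

The second result is exactly the "I know I don't know" knowledge that is
not a logical consequence of the agent's information about tea prices.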

I will discuss the problem of characterizing an agent's state of knowledge
when s/he has partial information, and give such a characterization
in both the single agent and multi-agent case.  The multi-agent
case turns out to be much harder than the single agent case, and
we're still not quite sure that we have the right characterization
there.  I will also try to relate this work to results of Konolige,
Moore, and Stark, on non-monotonic logic and circumscriptive ignorance.

------------------------------

End of AIList Digest
********************

∂05-Jun-84  2249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #70
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Jun 84  22:48:29 PDT
Date: Tue  5 Jun 1984 21:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #70
To: AIList@SRI-AI


AIList Digest           Wednesday, 6 Jun 1984      Volume 2 : Issue 70

Today's Topics:
  Games - Computer War Games Request,
  AI Tools - Stanford Computer Plans,
  Scientific Method - Hardware Prototyping,
  Seminar - Expert System for Maintenance
----------------------------------------------------------------------

Date: 1 Jun 84 13:22:15-PDT (Fri)
From: hplabs!intelca!cem @ Ucb-Vax.arpa
Subject: Computer War Games
Article-I.D.: intelca.287

This may be a rather simple problem, but at least it has no philosophical
ramifications.

I am developing a game that plays very similarly to the standard combat
situation type games that Avalon Hill is famous for. Basically, it has
various pieces of hardware, such as battleships, aircraft carriers,
destroyers, transports, tanks, armies, various aircraft, etc. and the
purpose is to build a fighting force using captured cities and defeat
the opposing force. It is fairly simple to make the computer a "game
board" however I would also like it to be at least one of the opponents
also. So I need some pointers on how to make the program smart enough
to play a decent game. I suspect there will be some similarities to
chess since it too is essentially a war game. The abilities I hope to
endow my computer with are those of building a defense, initiating an
offense, and a certain amount of learnability. Ok world, what text
or tome describes techniques to do this?  I have a book on "gaming
theory" that is nearly useless, I suspect.  I'd like one that is a little
more practical and less "and this is the proof ..." with the next
sentence beginning 10 pages later.  Maybe something like Newman and
Sproull's graphics text but for AI.
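[Moderator's aside: the standard starting point for this kind of two-player
game is minimax search over the game tree with a heuristic evaluation
function, the same machinery used for chess programs.  A bare sketch; the
toy game wired into the example at the bottom is invented for illustration:]

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Plain minimax.  `evaluate` scores a state from the maximizer's
    viewpoint, `moves(state, maximizing)` lists legal moves, and
    `apply_move(state, m)` returns the successor state.  Returns
    (best score, best move)."""
    ms = moves(state, maximizing)
    if depth == 0 or not ms:
        return evaluate(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in ms:
        score, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, evaluate, moves, apply_move)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, m
    return best, best_move

# Toy game: the state is a number; the maximizer may add 1 or 2,
# the minimizer may subtract 1 or 2; the score is the number itself.
result = minimax(0, 2, True, lambda s: s,
                 lambda s, mx: [1, 2] if mx else [-1, -2],
                 lambda s, m: s + m)
print(result)
```

Adding alpha-beta pruning and a better evaluation function (material,
position, supply lines) is where most of the "decent game" work goes.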

                             --Chuck McManis


        ihnp4!               Disclaimer : All opinions expressed herein are my
             \                            own and not those of my employer, my
              dual!    proper!            friends, or my avocado plant.
             /    \   /
      fortune!     \ /
                    X--------> intelca!cem
     ucbvax!       / \
           \      /   \
            hplabs!    rocks34!      ARPAnet : "hplabs!intelca!cem"@Berkeley
           /
        hao!

------------------------------

Date: Fri 1 Jun 84 15:17:06-PDT
From: Mark Crispin <MRC@SU-SCORE.ARPA>
Subject: Stanford University News Service press release

     [Forwarded from the Stanford bboard by CC.Clive@UTEXAS-20.]
    [Forwarded from the UTexas-20 bboard by CMP.Werner@UTEXAS-20.]

                STANFORD UNIVERSITY NEWS SERVICE
                   STANFORD, CALIFORNIA 94305
                         (415) 497-2558

FOR INFORMATION CONTACT: Joel Shurkin
FOR IMMEDIATE RELEASE

    STANFORD COMMISSIONS COMPUTER TO REPLACE LARGE DEC-20'S.

STANFORD--
        Stanford University is negotiating with a small Silicon
Valley company to build large computers to replace the ubiquitous
DECSYSTEM-20s now ``orphaned'' by their manufacturer, Digital
Equipment Corp. (DEC).

        The proposed contract, which would total around $1.4
million, would commission two machines from Foonly Inc. of
Mountain View for delivery early in 1986.  Foonly is owned by
former Stanford student David Poole.

        According to Len Bosack, director of the Computer Science
Department's Computer Facilities, the Foonly F1B computer system
is about four times faster than the DEC model 2060 and 10 times
faster when doing floating-point computations (where the decimal
point need not be in the same place in each of the numbers
calculated) that are characteristic of large-scale engineering
and scientific problems.

        Ralph Gorin, director of Stanford's Low Overhead Time
Sharing (LOTS) Facility -- the academic computer center -- said
the Foonly F1B system, which is totally compatible with the
DEC-20, is an outgrowth of design work done by Poole and others
while at the Stanford Artificial Intelligence Laboratory.

        Since 1977, Foonly has built one large system, the F1,
and several dozen smaller systems.  The Foonly F1B is a
descendant of the original F1, with changes reflecting advances
in integrated circuit technology and the architectural
refinements (internal design) of the latest DEC-20s.

        A spokesman for DEC said the company announced last year
it had discontinued work on a successor to the DEC-20, code named
``Jupiter,'' and would continue to sell enhanced versions of the
large mainframe.  Service on the machines was promised for the
next ten years.

        However, said Sandra Lerner, director of the Computing
Facilities at the Graduate School of Business, the
discontinuation of DEC-20 development left approximately 1,000
customers world-wide without a practicable ``growth path.''

        Ten DECSYSTEM-20 computers on campus make that machine the
most numerous large system at Stanford.

        The Graduate School of Business uses its two DEC-20s for
administration, coursework, and research.  The Computer Science
Department uses two systems for research and administration.
LOTS, the academic computer facility, supports instruction and
unsponsored research on three systems and hopes to add one more
in the time before the F1B is available.

        Other DEC-20s are at the Department of Electrical
Engineering, the artificial intelligence project at the Medical
Center (SUMEX), and the recently formed Center for the Study of
Language and Information (CSLI).

        The Stanford University Network (SUNet), the main
university computer communications network, links together the
10 DEC-20s, approximately 30 mid-size computers, about 100
high-performance workstations, and nearly 400 terminals and
personal computers.

        The DEC-20 has been a cornerstone of research in artificial
intelligence (AI).  Most of the large AI systems evolved on the
DEC-20 and its predecessors.  For this reason, Stanford and other
computer science centers depend on these systems for their
on-going research.

        Lerner said the alternative to the new systems would
entail prohibitive expense to change all programs accumulated
over nearly twenty years at Stanford and to retrain several
thousand student, faculty, and staff users of these systems.  The
acquisition of the Foonly systems would be a deliberate effort to
preserve these university investments.

6-1-84                        -30-                         JNS3A

EDITORS: Lerner may be reached at (415) 497-9717, Gorin at
497-3236, and Bosack at 497-0445.

------------------------------

Date: Mon 4 Jun 84 22:22:51-EDT
From: David Shaw <DAVID@COLUMBIA-20.ARPA>
Subject: Correcting Stone's Mosaic comments

Reluctant as I am to engage in a computer-mediated professional spat, it is
clear that I can no longer let the inaccuracies suggested by Harold Stone's
Mosaic quote go uncorrected.  During the past two weeks, I've been
inundated with computer mail asking me to clarify the issues he raised.  In
my last message, I tried to characterize what I saw as the basic
philosophical differences underlying Harold's attacks on our research.
Upon reading John Nagle's last message, however, it has become clear to me
that it is more important to first straighten out the surface facts.

First, I should emphasize that I do not in any way hold John Nagle
responsible for propagating these inaccuracies.  Nagle interpreted Stone's
remarks in Mosaic exactly as I would have, and was careful to add an
"according to the writer quoted" clause in just the right place.  I also
agree with Nagle that Stone's observations would have been of interest to
the AI community, had they been true, and thus can not object to his
decision to circulate them over the ARPANET.  As it happens, though, the
obvious interpretation of Stone's published remarks, as both Nagle and I
interpreted them, was, quite simply, counterfactual.

Nagle interpreted Stone's remarks, as I did, to imply that (in Nagle's
words) "NON-VON's 1 to 3 are either unfinished or were never started."
(Stone's exact words were "Why is there a third revision when the first
machine wasn't finished?")  In fact, a minimal (3 processing element)
NON-VON 1 has already been completed and thoroughly tested.  The custom IC
on which it is based has been extensively tested, and has proved to be 100%
functional.  Construction of a non-trivial (though, at 128 PE's, still
quite small) NON-VON 1 machine awaits only the receipt from ARPA's MOSIS
system of enough chips to build a working prototype.  If MOSIS is in fact
able to deliver these parts according to the estimated timetable they have
given us, we should be able to demonstrate operation of the 128-node
prototype before our original milestone date of 12/84.

In fact, we have proceeded with all implementation efforts for which we
have received funding, have developed and tested working chips in an
unusually short period of time, and have met each and every one of our
project milestones without a single schedule overrun.  When the editors of
Mosaic sent me a draft copy of the text of their article for my review, I
called Stone, and left a message on his answering device suggesting that
(even if he was not aware of, did not understand, or had some principled
objection to our phased development strategy) he might want to change the
words "wasn't finished" to "hasn't yet been finished" in the interest of
factual accuracy.  He never returned my call, and apparently never
contacted Mosaic to correct these inaccuracies.

For the record, let me try to explain why NON-VON has so many numbers
attached to its name.  NON-VON 2 was a (successful) "paper-and-pencil"
exercise intended to explore the conceptual boundaries of SIMD vs. MIMD
execution in massively parallel machines.  As we have emphasized both in
publications and in public talks, this architecture was never slated for
physical implementation.  To be fair to Stone, he never explicitly said
that it was.  Still, I (along with Nagle and others who have since
communicated with me) felt that Stone's remarks SUGGESTED that NON-VON 2
provided further evidence that we were continually changing our mind about
what we wanted to build, and abandoning our efforts in midstream.  This is
not true.

NON-VON 3, on the other hand, was in fact proposed for actual
implementation.  Although we have not yet received funding to build a
working prototype, and will probably not "freeze" its detailed design for
some months, considerable progress has been made in a tentative design and
layout for a NON-VON 3 chip containing eight 8-bit PE's.  The NON-VON 3 PE
is based on the same general architectural principles as the working
NON-VON 1 PE, but incorporates a number of improvements derived from
detailed area, timing, and electrical measurements we have obtained from
the NON-VON 1 chip.  In addition, we are incorporating a few features that
were considered for implementation in NON-VON 1, but were deemed too
complex for inclusion in the first custom chip to be produced at Columbia.

While we still expect to learn a great deal from the construction of a
128-node NON-VON 1 prototype, the results we have obtained in constructing
the NON-VON 1 chip have already paid impressive dividends in guiding our
design for NON-VON 3, and in increasing the probability of obtaining a
working, high-performance, 65,000-transistor chip within the foreseeable
future.  Based on his comments, I can only assume that, in my position,
Stone would have attempted to jump directly from an "armchair design" to a
working, highly optimized 65,000-transistor nMOS chip without wasting any
silicon on interim experimentation.  This strategy has two major drawbacks:

1.  It tends to result in architectures that micro-optimize (in both the
area and time dimensions) things that ultimately don't turn out to make
much difference, at the expense of things that do.

2.  It often seems to result in chips that never work.  Even when they do,
the total expenditure for development, measured in either calendar months,
designer-months, or fabrication costs, is typically far larger than is the
case with a phased strategy employing carefully selected elements of
"bottom-up" experimentation.

Finally, let me again state my view that one of the essential characteristics
of the emerging paradigm for experimental research in the field of
nonstandard architectures is the development of "non-optimal" machines that
nonetheless clearly explicate and test new architectural ideas.  Even in
NON-VON 3, we have not attempted to embody all of (or even all of the most
important) architectural features that we believe will ultimately prove
important in massively parallel machines.

By way of illustration, we have thus far limited the scope of our
experimental work to very fine grain SIMD machines supporting only a single
physical PE interconnection scheme.  This is not because we believe that
the future of computation lies in the construction of such machines.  On
the contrary, I am personally convinced that, if massively parallel
machines ever do find extensive use in practical applications (and, in my
view, it is too early to predict whether they will), they are almost
certain to exhibit heterogeneity in all three dimensions (granularity,
synchrony and topology).

Ultimately, we hope to broaden the scope of the NON-VON project to consider
the opportunities and problems associated with more than one class of
processing element, multiple-SIMD, as opposed to strictly SIMD, execution
schemes, and the inclusion of additional communication links.  In the
context of a research (as opposed to a development) effort, however, it
often seems to be more productive to explore a few mechanisms in some
detail than incorporate within the first architectural experiment all
features that seem like they might ultimately come in handy.

The NON-VON 1 prototype, along with our proposed NON-VON 3 machine,
exemplify this approach to experimental research in computer architecture.
Until we lose interest in the problems of massively parallel computation,
or run out of either unresolved questions or the funding to answer them, we
are likely to stick to our current research strategy, which is based in
part on the implementation of experimental hardware in multiple, partially
overlapped stages.  Although I know this will upset Harold, there may thus
someday be a NON-VON 4, a NON-VON 5, and possibly even a NON-VON 6.  Some
of these later successors may never get past the stage of educated doodles,
while others may yield only concrete evidence of the shortcomings of some
of our favorite architectural ideas.

I believe it to be characteristic of the paradigm shift to which I referred
in my last message that the very strategy to which we attribute much of our
success is casually dismissed by Stone as evidence of indecision and
failure.  As decreasing IC device dimensions and the availability of
rapid-turnaround VLSI facilities combine to significantly expand the
possibilities for experimental research on computer architectures, it may
be useful to take a fresh look at our criteria for evaluating research methods
and research results in this area.

David


P.S.  For those who may be interested, a more detailed explanation of the
rationale behind our plan for the phased development of NON-VON prototypes
is outlined in a paper presented at COMPCON '84.  This paper was not,
however, available to Stone at the time his remarks were quoted in Mosaic;
in general, our failure to promptly publish papers describing our work is
probably the source of much legitimate criticism of the NON-VON project.

------------------------------

Date: 29 May 1984 16:59-EDT
From: DISRAEL at BBNG.ARPA
Subject: Seminar - Expert System for Maintenance

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


There will be a seminar on Thursday, June 7th at 10:30 in the 2nd floor
large conference room.  The speaker will be Gregg Vesonder of Bell
Labs.

           ACE: An Expert System for Telephone Cable Maintenance

                        Gregg T. Vesonder

                        Bell Laboratories
                          Whippany, NJ

As more of the record keeping and monitoring functions of the local
telephone network are automated, there is an increasing burden on the
network staff to analyze the information generated by these systems.
An expert system called ACE (Automated Cable Expertise) was developed
to help the staff manage this information.  ACE analyzes the
information by using the same rules and procedures that a human analyst
uses.  Standard knowledge engineering techniques were used to acquire
the expert knowledge and to incorporate that knowledge into ACE's
knowledge base.  The most significant departure from "standard" expert
system architecture was ACE's use of a conventional data base
management system as its primary source of information.  Our experience
with building and deploying ACE has shown that the technology of expert
systems can be useful in a variety of business data processing
environments.
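[As a rough illustration of the architecture the abstract describes
-- rules applied to facts drawn from a conventional database rather
than entered interactively -- here is a minimal sketch.  The table,
threshold, and rule are invented for the example; ACE's actual rules
and schema are not given in the abstract.]

```python
# Sketch of a rule engine whose "working memory" comes from a DBMS.
# The reports table and the fault threshold are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (cable TEXT, faults INTEGER)")
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [("C-101", 7), ("C-102", 1), ("C-103", 4)])

FAULT_THRESHOLD = 3  # invented cutoff for the example

def flag_cables(db):
    """Apply one analyst-style rule: a cable with many recent
    trouble reports is a candidate for preventive maintenance."""
    rows = db.execute(
        "SELECT cable, faults FROM reports ORDER BY cable").fetchall()
    return [cable for cable, faults in rows if faults >= FAULT_THRESHOLD]

print(flag_cables(conn))  # ['C-101', 'C-103']
```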

------------------------------

End of AIList Digest
********************

∂06-Jun-84  2238	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #71
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Jun 84  22:37:53 PDT
Date: Wed  6 Jun 1984 21:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #71
To: AIList@SRI-AI


AIList Digest            Thursday, 7 Jun 1984      Volume 2 : Issue 71

Today's Topics:
  Games & Expert Systems - Source Information,
  AI Programming - Definitions,
  Expert Systems - MYCIN Demo,
  Humor - Turing Machine,
  AI Contracts - Automated Classification and Retrieval,
  Seminar - Programming by Example,
  Conferences - Approximately Solved Problems
----------------------------------------------------------------------

Date: 6 Jun 1984 13:12:28 EDT
From: Perry W. Thorndyke <THORNDYKE@USC-ISI>
Subject: computer war games

Reply to Chuck McManis's request for information on war games:

There are literally hundreds of programs, written in a variety of languages
for a variety of machines, that support battle simulation or war gaming.
A catalog of these is published annually and is available under the title
"Catalog of Wargaming and Military Simulation Models" from Studies,
Analysis, and Gaming Agency; Organization of the Joint Chiefs of Staff;
The Pentagon; Washington, D.C.

Few, if any, of the systems described in the catalog provide "intelligent"
simulation of opponent behavior.  One reason for this is that there exists
no articulated model for expertise in tactical planning and decision making.
We at Perceptronics are developing a Navy tactical battle game with an
automated opponent based on a cognitive model of tactics.  The project is a
vehicle to explore (1) development of an expert model of time-stressed
tactical decision making, (2) development of an instructional system to
teach these skills to a novice, (3) automating an adaptive, intelligent
opponent using the expert model, and (4) making the opponent behavior
modifiable under program control of the instructional system to achieve
pedagogical objectives.  A technical report is due out soon; if you are
interested, send your address and I'll add you to the mailing list.

Perry Thorndyke
Perceptronics, Inc.
545 Middlefield Road
Menlo Park, CA 94025
(415) 321-4901

thorndyke@usc-isi

------------------------------

Date: 6 Jun 84 16:34:24 EDT (Wednesday)
From: Chris Heiny <Heiny.henr@XEROX.ARPA>
Subject: Re: Computer Wargames

Sounds like you've got your work cut out.  It'll probably be
considerably more complex than a chess player, because chess is the
simplest of wargames (I choose to ignore checkers): 2 players with 32
counters (of 6 types) on a 64 space board, with relatively limited
connections (4) per space.  More complex wargames have more players,
with hundreds of counters (of many more than 6 types) on a board with
thousands of spaces, each space usually connecting to 6 others.  The
rules are vastly more complex as well.
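[For readers unfamiliar with hex boards: the "each space connects to 6
others" adjacency is easy to model with axial coordinates.  A quick
sketch, with a coordinate convention chosen for the example rather
than taken from the message:]

```python
# Hex-map adjacency in axial coordinates: each cell (q, r) has
# exactly six neighbors, one per direction vector below.
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    """Return the six cells adjacent to hex (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

print(len(hex_neighbors(0, 0)))  # 6
```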

The project sounds pretty interesting though, and I'll be glad to lend
what aid I can from this distance.


                                Chris

------------------------------

Date: Sun, 3 Jun 84 15:07 PDT
From: Brian Reid <reid@Glacier>
Subject: AI programs: a definition

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


An AI program is a program written by a person who fervently believes that
he is doing AI as he writes the program. Mere belief is not sufficient; it
must be zealous belief.

------------------------------

Date: Sun, 3 Jun 84 16:18 PDT
From: Mark Kent <kent@Navajo>
Subject: AI programs: addition to a definition

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


In addition to the definition given by reid@Glacier:

An AI program is a program in which at least one of the important
subproblems that needs solving is solved by a brute force method.

------------------------------

Date: Sun 3 Jun 84 20:23:27-PDT
From: Bruce Buchanan  <BUCHANAN@SUMEX-AIM.ARPA>
Subject: AI Program Demo

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

[The following is part of an exchange of messages about the percentage
of graduating AI students who have been exposed to actual AI program
demos.  I have edited it slightly. -- KIL]


MYCIN is available on SUMEX from the guest account -- remember that
large jobs are slow during the day.  Once on Sumex, type

        <MYCIN>MYCIN

to the Exec and read the help options.  If you don't know much
medicine it might be a good idea to run a library case first.
You should not need someone else to demo it for you, but there
are still people around who worked on MYCIN when it was an active
project if you need help.

The password for the Sumex guest acct is available from RYALLS @ SUMEX.

A caveat: the medical knowledge base has not been updated
in the past several years to reflect knowledge of new drugs or
improved therapies.

bgb

------------------------------

Date: Mon 4 Jun 84 08:46:07-PDT
From: Bud Spurgeon <SPURGEON@SU-CSLI.ARPA>
Subject: Re: Have you seen?

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


           How many MTC students have seen a Turing machine?
					-- Moshe Vardi

Our DEC 2060 nicknamed "TURING" is on view daily in the Pine Hall machine
room.
                -Bud :-)
(P.S. We're still looking for a tape cabinet capable of storing infinitely
long tape.)
(P.P.S. Backups on this thing take FOREVER.)

------------------------------

Date: Fri, 1 Jun 84 09:26:42 edt
From: aronson@nlm-mcs (Jules P. Aronson)
Subject: Research Contract

Please distribute the following announcement to Research people in the
fields of AI and Information Science:
    --------------------------------------------------------------

AUTOMATED CLASSIFICATION AND RETRIEVAL PROJECT -- The Lister Hill National
Center for Biomedical Communications, National Library of Medicine, is
developing a research project to investigate, develop, and evaluate
Information Science, Computational Linguistics, and Artificial Intelligence
techniques which support the automated classification and retrieval of
biomedical literature.  The project shall include investigations in natural
language understanding, knowledge representation, and information retrieval,
to explore the development of automated systems for identifying,
representing, and retrieving relevant concepts and main ideas from printed
documents.

Written requests for RFP NLM-84-115/PSP, should be addressed to the National
Library of Medicine, Office of Contracts Management, Building 38A, Room
B1N17, 8600 Rockville Pike, Bethesda, Maryland 20209, Attention:  Patricia
Page.  The RFP will be available in approximately 30 days and will close 30
days after it is issued.

------------------------------

Date: Tue 5 Jun 84 23:15:02-EDT
From: JMILLER%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Programming by Example

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Title: Programming by Example

Speaker: Dan Halbert, University of California, Berkeley,
         and Xerox Corporation, Office Systems Division

Wednesday, June 6, 2pm, AI playroom (8th floor, Tech Square)


Most computer-based applications systems cannot be programmed by their
users. We do not expect the average user of a software system to be able
to program it, because conventional programming is not an easy task.

But ordinary users can program their systems, using a technique called
"programming by example". At its simplest, programming by example is
just recording a sequence of commands to a system, so that the sequence
can be played back at a later time, to do the same or a similar task.
The sequence forms a program. The user writes the program -in the user
interface- of the system, which he already has to know in order to
operate the system. Programming by example is "Do what I did."

A simple program written by example may not be very interesting. I will
show methods for letting the user -generalize- the program so it will
operate on data other than that used in the example, and for adding
control structure to the program.

In this talk, I will describe programming by example, discuss current
and past research in this area, and also describe a particular
implementation of programming by example in a prototype of the Xerox
8010 Star office information system.
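[The "Do what I did" idea in the abstract -- record a command sequence
in the user interface, then replay it -- can be sketched in a few
lines.  The class and method names below are invented for
illustration; Halbert's Star implementation is far richer, notably in
how recorded programs are generalized.]

```python
# Minimal record/playback sketch of programming by example.
class Recorder:
    def __init__(self):
        self.script = []      # recorded (command, args) pairs
        self.recording = False

    def start(self):
        self.recording = True
        self.script = []

    def stop(self):
        self.recording = False

    def do(self, command, *args):
        """Execute a command now; remember it if we are recording."""
        if self.recording:
            self.script.append((command, args))
        return command(*args)

    def playback(self):
        """Replay the recorded command sequence."""
        return [command(*args) for command, args in self.script]

rec = Recorder()
rec.start()
rec.do(str.upper, "memo")      # the user performs actions...
rec.do(str.strip, "  draft ")  # ...which are executed and recorded
rec.stop()
print(rec.playback())  # ['MEMO', 'draft']
```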

------------------------------

Date: Thu 24 May 84 16:29:48-EDT
From: Joseph Traub <TRAUB@COLUMBIA-20.ARPA>
Subject: Call for papers

                           CALL FOR PAPERS

Symposium on Complexity of Approximately Solved Problems

April 17-19, 1985

Computer Science Department
Columbia University
New York, NY  10027

SUPPORT:  This symposium is supported by a grant from the System Development
Foundation.

SCOPE:  This multidisciplinary symposium focuses on problems which are
approximately solved and for which optimal algorithms or complexity results
are available.  Of particular interest are distributed systems, where
limitations on information flow can cause uncertainty in the approximate
solution of problems.  The following is a partial list of topics:  distributed
computation, approximate solution of hard problems, applied mathematics,
signal processing, numerical analysis, computer vision, remote sensing,
fusion of information, prediction, estimation, control, decision theory,
mathematical economics, optimal recovery, seismology, information theory,
design of experiments, stochastic scheduling.

INVITED SPEAKERS:  The following is a list of invited speakers.

L. Blum, Mills College                  J. Halpern, IBM
L. Hurwicz, University of Minnesota     D. Johnson, AT&T - Bell Laboratories
J. Kadane, Carnegie-Mellon University   R. Karp, Berkeley
H.T. Kung, Carnegie-Mellon University   D. Lee, Columbia University
M. Milanese, Politecnico di Torino      C.H. Papadimitriou, Stanford University
J. Pearl, UCLA                          M. Rabin, Harvard University and
                                                  Hebrew University
S. Reiter, Northwestern University      A. Schonhage, University of Tubingen
K. Sikorski, Columbia University        S. Smale, Berkeley
J.F. Traub, Columbia University         G. Wasilkowski, Columbia University
                                            and University of Warsaw
A.G. Werschulz, Fordham University      H. Wozniakowski, Columbia University
                                            and University of Warsaw


CONTRIBUTED PAPERS:  All appropriate papers for which abstracts are contributed
will be scheduled.  To contribute a paper send title, author, affiliation, and
abstract on one side of a single 8 1/2 by 11 sheet of paper.


          TITLES AND ABSTRACTS MUST BE RECEIVED BY JANUARY 15, 1985


PUBLICATION:  Invited papers will be published.

REGISTRATION:  The symposium will be held in the Kellogg Conference Center on
the Fifteenth Floor of the International Affairs Building, 118th Street and
Amsterdam Avenue.  The conference schedule and paper abstracts will be
available at the registration desk.  Registration will start at 9:00 a.m.
There is no registration charge.

FOR FURTHER INFORMATION:  The program schedule for invited and contributed
papers will be mailed by about March 15 only to those responding to this
Call for Papers.  If you have any questions, contact TRAUB@Columbia-20.ARPA.

To help us plan for the symposium please send the following information to
NG@Columbia-20.ARPA.


Name: ________________________ Affiliation: ___________________________

Address: __________________________________________________________________

City: _________________ State: _____________________ Zip: _______________


( ) I will attend the Complexity Symposium.

( ) I may contribute a paper.

( ) I may not attend, but please send program.

------------------------------

End of AIList Digest
********************

∂10-Jun-84  1607	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #72
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jun 84  16:05:14 PDT
Date: Sun 10 Jun 1984 14:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #72
To: AIList@SRI-AI


AIList Digest            Sunday, 10 Jun 1984       Volume 2 : Issue 72

Today's Topics:
  Linguistics - Name Grammar Request,
  Planning - Multi-Agents and Complex World Models,
  Courses - Expert Systems,
  Perception & Philosophy - Cross-Time Identity,
  Scientific Method - Mathematics,
  Logic - Logic and AI at U. Maryland,
  AI Societies & Periodicals - Canadian AI Newsletter
----------------------------------------------------------------------

Date: Wed 6 Jun 84 08:10:26-PDT
From: TEX82@SRI-AI.ARPA
Subject: Names

BibTeX, LaTeX's bibliography lookup program, needs:

  * a grammar of author names--that is, a BNF specification of
    the components of a name, and

  * a specification of how to print a name, in various styles, given
    its parse tree.

Possible style choices for names include last name first or last and,
perhaps, complete first/middle names or initials.

The rules should handle almost all cases encountered in technical
literature, including  'Brinch Hansen, Per'  and  'Jean-Pierre van der
Waerden, Jr.'  but need not cover cases like  'John Thompson, Earl of
Rumford'.  The grammar need not be logically complete; for example, it
would be all right to consider  'Colonel'  to be the first name of
'Colonel John Blimp', if that produces the correct printed version.
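[To make the request concrete, here is one rough cut at such a
grammar, as a heuristic parser rather than BNF.  It is only a sketch
of the kind of rules being asked for -- not what BibTeX ultimately
adopted -- using the convention that lowercase interior words form a
'von' part and a trailing token like 'Jr.' is a suffix.]

```python
# Heuristic name parser covering "Last, First" and "First von Last"
# forms, with an optional ", Jr."-style suffix.  The suffix list and
# lowercase-word rule are simplifying assumptions for the example.
SUFFIXES = {"Jr.", "Sr.", "II", "III"}

def parse_name(name):
    parts = [p.strip() for p in name.split(",")]
    suffix = parts.pop() if len(parts) > 1 and parts[-1] in SUFFIXES else ""
    if len(parts) == 2:                      # "Last, First" form
        return {"first": parts[1], "von": "",
                "last": parts[0], "jr": suffix}
    words = parts[0].split()                 # "First von Last" form
    i = 0
    while i < len(words) - 1 and not words[i][0].islower():
        i += 1                               # leading capitalized words
    j = i
    while j < len(words) - 1 and words[j][0].islower():
        j += 1                               # run of lowercase "von" words
    return {"first": " ".join(words[:i]), "von": " ".join(words[i:j]),
            "last": " ".join(words[j:]), "jr": suffix}

print(parse_name("Jean-Pierre van der Waerden, Jr."))
print(parse_name("Brinch Hansen, Per")["last"])  # Brinch Hansen
```

Note that 'Colonel John Blimp' comes out with first name 'Colonel
John', exactly the permissive behavior the request allows.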

Please contact me if you know of anything like this.

Leslie Lamport


[Please forward this to anyone who might have an answer.  Leslie has
been doing a great job building the LaTeX friendly user interface to
TeX, and a great many of us can benefit from any increased functionality
he can develop for the bibliography preprocessor.  -- KIL]

------------------------------

Date: Sat, 9 Jun 84 20:41 EDT
From: THE DESK (terminal)OF <Gangel%upenn.csnet@csnet-relay.arpa>
Subject: multi-agents and complex world models

      There are many planning systems using multi-agents and temporal
 constraints. But the domains for most of these systems are limited
 to only a very simplistic world model. The system we are working on
 involves a complex graphic display of the inside of NASA's space lab
 (within the space shuttle). There are many complex objects and
 multi-agents to contend with to provide a true simulation of even a
 simple command.

      Hendrix's model shows an interesting world model for a simple
 scenario, but without a sophisticated planner. There must be further
 research in such "robot-like" worlds and if so, I would greatly
 appreciate any pointers toward articles/papers/books dealing with
 such complex world models and planning systems.

                                Thank you,

                                Jeffrey S. Gangel
                                [ Gangel%upenn.csnet@csnet-relay.arpa ]
                                Dept. of Computer and Information Science
                                Moore School
                                University of Pennsylvania
                                Philadelphia, PA 19104

------------------------------

Date: 7 Jun 84 10:32:19 PDT (Thursday)
From: Isdale.es@XEROX.ARPA
Subject: Course using *Building Expert Systems* (Hayes-Roth,Waterman,Lenat)


In response to a request for information on courses using the
text *Building Expert Systems* by R. Hayes-Roth, D. Waterman, and D.
Lenat (Addison Wesley, 1983):

UCLA Extension Offered such a class this past spring: Developing Expert
Systems.  Instructor: Dr. Douglas R. Partridge (works for one of the
defense/aerospace contractors in the LA area.)

The course was taught as a lecture-seminar w/demonstrations & code
walkthroughs.  Both LISP and PROLOG methods were discussed.  A major
portion of the grading depended on a term project.  The class was
expanded from a seminar given by Dr. Partridge for the Technology
Transfer Society.

The prospectus I have is 3pg and too long for the digest.  I will forward
it on request but suggest calling the extension at (213) 825-3985 for
more up-to-date information.

J.B. Isdale
(Isdale.es@XEROX.ARPA)

------------------------------

Date: 6 Jun 84 6:00:08-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!colonel @
      Ucb-Vax.arpa
Subject: Watch out for that tree
Article-I.D.: gloria.220

It's the computer's own fault for using human-range vision.  Infra-red
would have revealed the cardboard tree.  "Take these broken wings ... "

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 6 Jun 84 6:07:27-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!colonel @
      Ucb-Vax.arpa
Subject: Re: cross-time identity.
Article-I.D.: gloria.221

This problem also arises in databases.  How do you find out whether the
Joe Szmoe in your tax database is the same as the Joe Szmoe in your
welfare database?  SSNs don't count - he may have several.

The problem is even worse when you pass from Artificial Intelligence
to Military Intelligence.  You may know nothing for certain about
enemy spies, and can only suspect that two spies are identical.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: Sun, 10 Jun 84 9:47:53 EDT
From: Stephen Wolff <steve@Brl-Bmd.ARPA>
Subject: Mathematical Methods

Not at all deep; maybe others will find our gropings briefly amusing .....

    Date:     Fri, 8 Jun 84 11:19:30 EDT
    From:     Brint <abc@BRL-TGR.ARPA>

"The usual attitude of mathematicians is reflected in their published
research papers and in mathematics textbooks.  Proofs are revamped and
polished until all trace of how they were discovered is completely
hidden.  The reader is left to assume that the proof came to the originator
in a blinding flash, since it contains steps which no one could possibly
have guessed would succeed.  The painstaking process of trial and error,
revision and adjustment are all invisible."

Alan Bundy


    From:     Stephen Wolff <steve@BRL-BMD.ARPA>

I have the greatest respect for Alan Bundy, and I agree with his words.  I
shall however adamantly disagree with his (or anyone's) implication that

"The painstaking process of trial and error, revision and adjustment....."

should NOT be invisible -- in a MATHEMATICS paper.  The purpose of such a
paper MUST be FIRST to advance knowledge; proofs MUST be as spare, concise
and lucid as it is within the author's talent to make them -- for sloppy or
wordy proofs are just that much harder to verify.  And, indeed, the paper is
diminished to PRECISELY the extent that the author's trials and fumbles are
displayed -- for they may prejudice the world-view of a reader and lead him
to the same (POSSIBLY erroneous) result.

If you say that there are too few (maybe no) places to publish mathematicians'
thought processes, methods of hypothesis, &c., then I shall agree.  And,
further, state my belief that UNTIL we are able to read how both successful
and unsuccessful mathematicians derive the objects of their study, then all
successful efforts at automated reasoning will be just blind beginners' luck.


    From:     Paul Broome <broome@BRL-TGR.ARPA>

Bundy was not implying that the dead end paths in the search for a proof
should be in the paper that publishes the proof.  Just before the portion
that Brint quoted, he discussed Polya's books, "How to Solve It" and
"Mathematical Discovery" and introduced the paragraph containing the
aforementioned quote with, "Polya's attitude in trying to understand the
'mysterious' aspects of problem solving is all too rare."  His next
paragraph begins with "The only attempt, of which I am aware, to explain
the process by which a proof was constructed, is B.L. van der Waerden's
paper, 'How the proof of Baudet's conjecture was found', .."

He's giving motivation for a book on the modeling of mathematical reasoning.


    From:     Brint <abc@BRL-TGR.ARPA>

Perhaps, as in so many endeavors, several bright people actually
agree:

        1. Mathematics papers are not the place for discussing
trial-and-error, inspirational flashes, false starts, and other
means for "discovering" truth and error.

        2. Forums are needed for the discussion of such ideas in
order to advance our understanding of the process at least toward
the end of improving mathematical reasoning by computer.

        3. In some limited way, such forums exist.  We need to
encourage and motivate our mathematicians to contribute to them.

Brint

------------------------------

Date: 8 Jun 84 16:57:32 EDT  (Fri)
From: JACK MINKER <minker@umcp-cs.arpa>
Subject: LOGIC and its ROLE in AI


                        SPECIAL YEAR
                             IN
                     MATHEMATICAL LOGIC
                            AND
                THEORETICAL COMPUTER SCIENCE


     Each year the Mathematics Department of the  University
of  Maryland  devotes  its attention to a special topic.  In
conjunction with the Department  of  Computer  Science,  the
1984-1985  academic  year  will  be  devoted to the topic of
mathematical logic and theoretical  computer  science.   The
year  will  consist  of eight sessions devoted to particular
areas.  The time-table that has evolved is given below.

     As will be noted, the week of October 22-26, 1984, will
be  devoted  to  issues  in LOGIC and its ROLE in ARTIFICIAL
INTELLIGENCE with emphasis on knowledge representation, com-
mon  sense reasoning, non-monotonic reasoning and logic pro-
gramming.

     The lectures will be open to the public.   The  precise
times  and  dates  of  the  lectures for the AI week will be
announced in the next few months.

     We anticipate that there will be modest financial  sup-
port  presumably  for  graduate students and junior faculty.
Applications for support for the week of October 22-26 to be
devoted  to  LOGIC  and  its ROLE in ARTIFICIAL INTELLIGENCE
should be sent to:

       Dr. Jack Minker
       Department of Computer Science
       University of Maryland
       College Park, Maryland
       20742
       minker@umcp-cs
       (301) 454-6119

Kindly send a letter including a curriculum vitae, a statement as to
the  importance of these issues to your research, the number
of days you might like to attend, and the amount of  support
that you might require.  We emphasize that we do not know if
we will have funds and even  assuming  they  are  available,
they  will  be  modest  at best.  You should also notify the
above by sending a message  over  the  net  expressing  your
interest in attending the open sessions.

     Those who plan to come but require no financial support should
also inform us of their intentions so that we may arrange for an
appropriately sized lecture hall.

     Those individuals interested in other topics associated
with this Math Year should contact:

       Dr. E.G.K. Lopez-Escobar
       Department of Mathematics
       University of Maryland
       College Park, Maryland
       20742
       egkle@umcp-cs
       (301) 454-3759

and provide the same information as above.


                       TIME SCHEDULE
                            AND
                         LECTURERS

     October 1-5, 1984.  Semantics and Logics of Programs.
     Participants: S. Brookes, D. Kozen,  A.  Meyer,  M.  O'Donnell,
         R. Statman

     October 8-12, 1984. Recursion Theory.
     Participants: R. Book, J. Case, R. Daley, D. Leivant, J. Myhill,
         A. Selman, P. Young

   **October 22-26, 1984. LOGIC and its ROLE  in  ARTIFICIAL INTELLIGENCE
     Participants: J. Barwise, M. van Emden, L. Henschen,  J.  McCarthy,
         R. Reiter

     December 3-7, 1984. Model Theory and Algebra.
     Participants: A. Macintyre, A. Mekler, C. Wood

     March 4-8, 1985.   Automath  and  Automating  Natural Deduction.
     Participants: N.G. de Bruijn, J. Zucker

     March 11-15, 1985.  Stability theory.
     Participants: J. Baldwin, S. Buechler, A. Pillay, C. Steinhorn

     April 22-26, 1985.  Toposes and Model Theory.
     Participants: A. Joyal, F. Lawvere, I. Moerdijk, G. Reyes,
          A. Scedrov

     April 29 - May 3, 1985. Toposes and Proof Theory.
     Participants: M. Bunge, P. Freyd, M. Makkai, D. Scott, P. Scott

------------------------------

Date: 5 Jun 84 9:00:08-PDT (Tue)
From: ihnp4!alberta!sask!utcsrgv!utai!gh @ Ucb-Vax.arpa
Subject: Canadian A.I. Newsletter -- Call for submissions
Article-I.D.: utai.187

                         ====================
                         Call for submissions
                         ====================
   CANADIAN
             A R T I F I C I A L   I N T E L L I G E N C E
                                                            NEWSLETTER

                      (Published by CSCSI/SCEIO)

The Canadian A.I. Newsletter invites submissions from Canada, the U.S., and the
rest of the world of any item relevant to artificial intelligence:

        -- Articles of general interest.
        -- Abstracts of recent publications, theses, and technical reports.
        -- Descriptions of current research and courses at a given institution.
        -- Reports of recent conferences, workshops and the like.
        -- Announcements of forthcoming conferences and other activities.
        -- Calls for papers.
        -- Book reviews (and books for review).
        -- Announcements of new A.I. companies and products.
        -- Opinions, counterpoints, polemic, and controversy.
        -- Humour, cartoons, artwork.
        -- Advertisements (rates upon request).
        -- Anything else concerned with A.I.

Please send submissions, either physical or electronic, to the editor:
   Graeme Hirst
   Department of Computer Science
   University of Toronto
   Toronto, CANADA  M5S 1A4

   Phone: 416-978-6277/6025
   CSNET: cscsi@toronto                 ARPANET: cscsi.toronto@csnet-relay
   UUCP:  utcsrgv!cscsi (connections to allegra, cornell, decvax, decwrl,
            deepthot, drea, floyd, garfield, hcr, ihnp4, linus, mbcsd,
            mcgill-vision, musocs, qucis, sask, ubc-vision, utzoo, uw-beaver,
            watmath, and many other sites)

                        ------------------------

The Canadian A.I. Newsletter is sent to all members of CSCSI/SCEIO, the
Canadian artificial intelligence society.  To join, write to CIPS (which
administers membership matters for the society) with the appropriate fee and a
covering note.  You need not be Canadian to be a member.
   CIPS
   243 College Street, 5th floor
   Toronto, CANADA  M5T 2Y1
Membership: $10 regular, $5 students (Canadian funds); there is a discount of
$2 for CIPS members.  Payment may be made in U.S. dollars at the current rate
of exchange.

------------------------------

End of AIList Digest
********************

∂15-Jun-84  1345	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #73
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Jun 84  13:42:34 PDT
Date: Fri 15 Jun 1984 11:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #73
To: AIList@SRI-AI


AIList Digest            Friday, 15 Jun 1984       Volume 2 : Issue 73

Today's Topics:
  AI Programming - Definition,
  Scientific Method - Mathematics,
  AI Reports - Recent Titles,
  Forum - Minsky and Asimov at Rensselaerville,
  Seminars - Motion of Objects in Contact & AI and APL &
    Learning Equation Solving Methods,
  Workshops - Expert Systems & Reasoning
----------------------------------------------------------------------

Date: 9 Jun 84 14:06:55-PDT (Sat)
From: hplabs!hao!seismo!cmcl2!floyd!clyde!burl!ulysses!gamma!exodus!dhc
      @ Ucb-Vax.arpa
Subject: Re: Definition of an AI program

  Article-I.D.: exodus.169
  In-Reply-To: Article <581@sri-arpa.UUCP>


How about this:
A program is an AI program if and only if it is written in LISP.

                                David H. Copp

[Or Prolog?  The "if ..." is commonly assumed, but the "only if ..."
seems much too strong.  I currently do list processing in C; while
I don't claim much AI content, I see little difference between the C
code and equivalent algorithms written in LISP.  Bob Amsler has pointed
out to me that spelling correctors are knowledge-based programs capable
of outperforming even intelligent humans; few such programs are written
in AI languages.  -- KIL]

------------------------------

Date: Wednesday, 13-Jun-84 16:33:08-BST
From: BUNDY HPS (on ERCC DEC-10) <Bundy%edxa@ucl-cs.arpa>
Subject: Mathematical Methods

        I support Broome's and Brint's interpretations of what I was
trying to say in my book.  I was not trying to criticise mathematics
papers per se, but to point out that they do not contain some of the
information that AI researchers need for computational modelling and to
make a plea for a forum for such information.

        But let me add a caveat to that.  The proofs in a paper are at
least as important a contribution to mathematics as the theorems they
prove.  Future mathematicians may want to use these proofs as models for
proofs in analogous areas of mathematics (think of diagonalization
arguments, for instance).  So it will improve the MATHEMATICAL content
of the papers if the author points out the structure of the proof and
draws attention to what s/he regards as the key ideas behind the proof.

                Alan Bundy

------------------------------

Date: 12 Jun 84 20:27:32-PDT (Tue)
From: hplabs!hao!seismo!rochester!sher @ Ucb-Vax.arpa
Subject: Re: Mathematical Methods
Article-I.D.: rocheste.7379

Personally,  I have done mathematics up to the beginning graduate level
for various courses.  When I do any difficult piece of mathematics I
find that after the fact I can never remember how I came upon the
proof.  I can reconstruct my steps but the reconstruction has no real
relationship to what I really did.  The sensation of finishing a proof
is highly analogous to waking up from a dream.  This is possibly the
most important reason why I am doing artificial intelligence rather
than mathematics today.  If other real mathematicians also operate in
this manner then it is not surprising that they are reluctant to write
up their reasoning processes.  They literally cannot remember them.
  -David

------------------------------

Date: Sun 10 Jun 84 13:26:20-PDT
From: Chuck Restivo  <Restivo@SU-SCORE>
Subject: LP - Library Update

          [Forwarded from the Prolog digest by Laws@SRI-AI.]

Isaac Balbin and Koenraad Lecot sent a copy of their
useful publication;

"Prolog and Logic Programming Bibliography"

The cost for obtaining your own copy is $5.00 Australian,
and includes the cost of Air Mail.

Contact:
                             Isaac Balbin
                    Department of Computer Science
                            Parkville 3052
                         Melbourne, Australia

Send information regarding new references and errata to

    UUCP:  {decvax,vax135}!mulga!Isaac
or
    ARPA:  CS.Koen@UCLA-Locus

so that the bibliography can be updated regularly.

[...]

------------------------------

Date: Wed 13 Jun 84 19:27:42-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: AI Reports

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                       Partial New Reports List
                   MATH & COMPUTER SCIENCE LIBRARY

                    (From Vol. 6, No. 6, 05/28/84)

The reports listed below are now available for circulation at Stanford.

     019257    Haridi, S. Sahlin, D.*Evaluation of logic programs based on
                 natural deduction.* Royal Inst. of Tech., Stockholm.
                 Telecomm. & Comp. Systems Dept.*TRITA-CS-8305 B.*1983.

     019261    Hendrix, G.G. Lewis, W.H.* Transportable natural language
                 interfaces to databases.* SRI International. A.I. Center.*
                 Tech.Note 228.*1981.

     019263    Walker, D.E. Hobbs, J.R.* Natural language access to
                 medical text.* SRI International. A.I. Center.*Tech.Note
                 240.*1981.

     019264    Pereira, F.* Logic for natural language analysis.* SRI
                 International. A.I. Center.*Tech.Note 275, Ph.D. Thesis.
                 Pereira, F.*1983. (Slightly revised version of a thesis
                 submitted to the Department of Artificial Intelligence,
                 University of Edinburgh for the degree of Doctor of
                 Philosophy).

     019271    Moore, R.C.* Semantical considerations on nonmonotonic
                 logic.* SRI International. A.I. Center.*Tech.Note 284.*
                 1983.

     019272    Uszkoreit, H.*A framework for processing partially free
                 word order.* SRI International. A.I. Center.*Tech.Note
                 285.*1983.

     019277    Warren, D.H.D.* Applied logic - its use and implementation
                 as a programming tool.* SRI International. A.I. Center.*
                 Tech.Note 290, Ph.D. Thesis. Warren, D.H.D.*1983.
                 (Verbatim copy of a thesis submitted to the Department of
                 Artificial Intelligence, University of Edinburgh in 1977
                 for the degree of Doctor of Philosophy).

     019278    Shieber, S.M.* Direct parsing of ID/LP grammars.* SRI
                 International. A.I. Center.*Tech.Note 291R.*1983
                 (revised).

     019279    Grosz, B.J. Joshi, A.K. Weinstein, S.*Providing a unified
                 account of definite noun phrases in discourses.* SRI
                 International. A.I. Center.*Tech.Note 292.*1983.

     019280    Martin, P. Appelt, D. Pereira, F.* Transportability and
                 generality in a natural language interface system.* SRI
                 International. A.I. Center.*Tech.Note 293.*1983.

     019292    Pereira, F.C.N. Warren, D.H.D.* Parsing as deduction.* SRI
                 International. A.I. Center.*Tech.Note 295.*1983.

     019284    Appelt, D.E.* Telegram: a grammar formalism for language
                 planning.* SRI International. A.I. Center.*Tech.Note 297.*
                 1983.

     019290    Appelt, D.* Planning English referring expressions.* SRI
                 International. A.I. Center.*Tech.Note 312.*1983.

     019294    Nilsson, N.J.* Probabilistic logic.* SRI International.
                 A.I. Center.*Tech.Note 321.*1984.

     019308    Meandzija, B.*Automated generation of communication
                 systems.* Southern Methodist U. Comp.Sci. & Eng.Dept.*
                 83-CSE-16.*1983.

     019319    Griswold, R.E.*The implementation of an experimental
                 language for manipulating sequences.* Arizona U.
                 Comp.Sci.Dept.*TR 83-20.*1983.

     019336    Janssens, D. Rozenberg, G.* Graph grammars with node label
                 controlled rewriting and embedding.* Colorado U.
                 Comp.Sci.Dept.*CU-CS-251-83.*1983.

     019368    Koskimies, K.*Extensions of one-pass attribute grammars.*
                 Helsinki U. Comp.Sci.Dept.*Rpt. A-1983-04.*1983.

     019373    Shilcrat, E. Panangaden, P. Henderson, T.*Implementing
                 multi sensor systems in a functional language.*  Utah U.
                 Comp.Sci.Dept.*UUCS-84-001.*1984.

------------------------------

Date: 13-Jun-84 02:36 PDT
From: William Daul  Augmentation Systems Division / MDC <WBD.TYM@OFFICE-2.ARPA>
Subject: AI Forum Set For Aug. 4-8

RENSSELAERVILLE, N.Y. -- Marvin Minsky, a co-founder and member of the MIT
Artificial Intelligence Laboratory, and science fiction writer/scientist Isaac
Asimov will address "Artificial Intelligence: Are We Being Outsmarted?" in an
Aug. 4-8 program at the Rensselaerville Institute located here.

The program will be conducted in the manner of a hearing, and Asimov and Minsky
will be questioned by participants in the AI program.

The cost for the program is $250.  More information can be obtained from
Mary-Ann Ronconi, Public Programs Coordinator, The Rensselaerville Institute,
Rensselaerville, N.Y. 12147.

------------------------------

Date: 06/11/84 12:23:47
From: AH
Subject: Seminar - Motion of Objects in Contact

                [Forwarded from the MIT bboard by SASW@MIT-MC.]

                        DATE:   Thursday, June 14, 1984
                        TIME:   Refreshments    3:45PM
                                Lecture         4:00PM
                       PLACE:   NE43-512A


                      "THE MOTION OF OBJECTS IN CONTACT"

                            Professor John Hopcroft
                              Cornell University

There is an increasing use of computers in the design, manufacture and
manipulation of physical objects.  An important aspect of reasoning about such
actions concerns the motion of objects in contact.  The study of problems of
this nature requires not only the ability to represent physical objects but the
development of a framework or theory in which to reason about them.  In this
talk such a development is investigated and a fundamental theorem concerning
the motion of objects in contact is proved.  The simplest form of this theorem
states that if two objects in contact can be moved to another configuration in
which they are in contact, then there is a way to move them from the first
configuration to the second configuration such that the objects remain in
contact throughout the motion.  The obvious applications of this result in
compliant motion and also applications in motion planning are discussed.

HOST:  Professor Silvio Micali

------------------------------

Date: Mon, 11 Jun 84 14:08:17 PDT
From: Philip Westlake <westlake@AEROSPACE>
Subject: Seminar - AI and APL

                       APL Users Group Meeting

The Aerospace Corporation APL Users Group is honored to present:
        Dr. Zdenek V. Jizbz and Ms. Phuong T. Nguyen
        of Chevron Oil Field Research Co., La Habra, California

Speaking on: "Artificial Intelligence and APL with Nested Arrays"

        Wednesday, June 13, 1984
        1:00 pm
        A1/1062
        The Aerospace Corporation

The Chevron Oil Field Research Company of La Habra, California has been
doing research on Expert Systems implemented in nested array APL, and they
are pleased with the rapid progress they have achieved in a relatively
short time due to the power of nested array APL.
A vector of nested vectors can be matched to a tree structure.  The utility
of this relationship, however, is relatively limited because nodes and
branches are implicit (not explicit).  By adding a convention similar to that
of Polish notation, nodes can be made explicit.  A special type of nested
vector called a scalar tree will be defined.  The following powerful
properties of scalar trees will be illustrated:

        1. The ability to separate syntactic constructs from semantics.

        2. The ability to form AND/OR trees of arbitrary complexity, and
           application of DeMorgan's law to such trees with a single
           one-character APL primitive function.

        3. The simplicity of building (primitive) inference engines in
           just a few lines of APL code.
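
[The "scalar tree" convention described above can be sketched outside
APL as well.  The fragment below is purely illustrative Python, not the
speakers' code, and the names are hypothetical: a tree is a nested tuple
whose head names the node, Polish-notation style, so nodes are explicit,
and De Morgan's laws push a negation through an AND/OR tree:

```python
def demorgan(tree):
    """Push a NOT through an AND/OR tree using De Morgan's laws."""
    if isinstance(tree, str):             # a leaf proposition
        return ('NOT', tree)
    op, *children = tree                  # head names the node explicitly
    dual = {'AND': 'OR', 'OR': 'AND'}[op]
    return (dual,) + tuple(demorgan(c) for c in children)

t = ('AND', 'p', ('OR', 'q', 'r'))
print(demorgan(t))
# ('OR', ('NOT', 'p'), ('AND', ('NOT', 'q'), ('NOT', 'r')))
```

In nested-array APL the talk claims this dualization reduces to a single
one-character primitive function; the recursion here is the long way
around the same idea.]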

There will be three lectures lasting about 50 minutes each.  Dr. Jizba will
be giving the first two lectures and Ms. Nguyen will be giving the third
lecture.

Lecture 1       Define Artificial Intelligence. Describe basic idea of
                Expert Systems.  Compare LISP to APL with nested arrays.
                Define tree structures, and describe the specific concept
                of an APL scalar tree.  Show how recursive functions are
                used to operate on nested arrays.

Lecture 2       Introduce Predicate Calculus, and illustrate how it can be
                implemented using APL nested structures.  Describe
                Production rules, and Inference Engine in APL implementation.
                Describe DRIVER function that allows English-like
                communication with user.

Lecture 3       (PTN) Understanding a sentence.  Kinds of sentences. Global
                and local dictionaries.  Meaning.  Syntactic sentences.
                History trace.  Handling of misspelled words and phrases.

U.S. Citizenship required in order to attend the presentation.

------------------------------

Date: 12 Jun 84 17:04:39 EDT
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Seminar - Learning Equation Solving Methods

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                  machine learning brown bag seminar

Title: Learning Equation Solving Methods from Worked Examples

Speaker:      Bernard Silver
              Dept. of Artificial Intelligence
              University of Edinburgh

Date:         Wednesday, June 27, 1984, 12:00-1:30
Location:     Hill Center, 7th floor lounge


       This talk will describe LP, a program that learns new techniques for
    solving  equations  by  examining  worked examples.  Unlike most of the
    work in this field, where the equations have been very simple, LP  uses
    equations  of A level standard (A levels are exams taken at 18, and are
    used for university selection.)

       In order to be able to successfully use a new technique,  LP  learns
    many  different types of information.  At the lowest level, LP compares
    consecutive lines in the worked example,  finding  differences  between
    them.  This allows the program to learn new rewrite rules.

       LP  also  tries  to  discover  the  strategic  purpose of each step,
    expressed in terms of  satisfying  preconditions  of  following  steps.
    From  this  viewpoint,  the  worked example can also be considered as a
    type of plan for solving the  equation.    LP  extracts  the  necessary
    information,  and  builds  a  plan  which is stored for future use.  LP
    executes the plan in a flexible way to solve new equations.

------------------------------

Date: Mon 11 Jun 84 00:17:24-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Workshop on Expert Systems

This is a repeat of an earlier announcement that went out on AIList;
note that the date of acceptance for submissions has been moved back to
July 1.

[The original announcement appeared in AIList Vol. 2 #58, May 15, 1984.
I will send a full copy of this second announcement to anyone who requests
it. -- KIL]

                ---------------

Date:     Sun, 10 Jun 84 15:39 EST
From:     John Roach <roach%vpi.csnet@csnet-relay.arpa>
Subject:  workshop on expert systems

                           CALL FOR PAPERS
        IEEE Workshop on Principles of Knowledge-Based Systems
      Sheraton Denver Tech Center, Denver, Colorado, 3-4 December 1984

Please send eight copies of a 1000-2000 word double-spaced, typed summary of
the proposed paper to:

        Mark S. Fox
        Robotics Institute
        Carnegie-Mellon University
        Pittsburgh, Pennsylvania 15213

July 1, 1984 is the deadline for the submission of summaries.
Authors will be notified of acceptance or rejection by July 23, 1984.
The accepted papers must be typed on special forms and received by the program
chairman at the above address by September 3, 1984.

General Chairman
John Roach
Dept. of Computer Science
Virginia Polytechnic Institute
Blacksburg, VA  24061  (703)-961-5368

[...]

------------------------------

Date: Thu 14 Jun 84 18:11:15-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Workshop on Reasoning

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                          Stanford Workshop On
                      PRACTICAL REASONING AND PLANNING
               Sponsored by CSLI and the Philosophy Department
                              June 19-21

The workshop will involve researchers in philosophy and artificial
intelligence. Workshop organizers anticipate a productive interaction
centering on issues of belief, desire, intention, and action in humans and
machines.

All sessions will be held in Building 380 (Mathematics), Room 380Y,
unless otherwise specified.

SCHEDULE: Tuesday, June 19
                10:00 to 11:45  John Searle
                1:30 to  3:15   Drew McDermott
                 3:30 to  5:30  Allan Gibbard

          Wednesday, June 20
                10:00 to 11:45  Gilbert Harman
                 1:30 to  3:15  Thomas Hill
                 3:30 to  5:30  Patrick Hayes, David Israel
                 8:30 to 10:30  Richard Jeffrey

          Thursday, June 21
                 9:00 to 10:30  Jon Doyle
                10:30 to 12:15  Hector-Neri Castaneda
                 2:00 to  3:45  James Allen

------------------------------

End of AIList Digest
********************

∂17-Jun-84  1531	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #74
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Jun 84  15:30:52 PDT
Date: Sun 17 Jun 1984 14:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #74
To: AIList@SRI-AI


AIList Digest            Sunday, 17 Jun 1984       Volume 2 : Issue 74

Today's Topics:
  AI Tools - Q'NIAL Request,
  Knowledge-Based Systems - Spelling Correctors,
  Metaphysics - Relevance of "Souls" to AI,
  Scientific Method - Mathematics,
  Linguistics - Commonsense Reasoning,
  Brain Theory - Processing Power,
  Conference - Hardware Design Verification
----------------------------------------------------------------------

Date: 15 Jun 1984 23:33-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Q'NIAL

I have recently heard of a language developed in Canada
(Queens University?) called Q'Nial or Nial.  These folks have been at
some shows (USENIX) and have a demo system which looks a lot like a Lisp
with Algol syntax.  Does anyone know about these guys?
Are there any technical papers?
(I think NIAL stands for Nested Inter???? Array Language.)

                                                        -Todd K.

------------------------------

Date: Fri, 15 Jun 84 11:58:54 PDT
From: Michael Pazzani <pazzani@AEROSPACE>
Subject: Spelling Correctors

I disagree with the statement that spelling correctors are knowledge-based
programs capable of outperforming even intelligent humans.

There are two basic parts to spelling correction: detection and correction.

In the case where there is more than one possible correction for a misspelled
word, people can of course use the context to find the correct spelling.
Selecting the proper choice is in many ways like selecting the intended
sense of a word.

Computers, of course, can be much better at detection of spelling errors
except when the misspelling is another word in the vocabulary.  I would
not call checking a word against a prestored vocabulary knowledge-based,
even with a complex root-stripping capability.

------------------------------

Date: 13 Jun 84 19:51:25-PDT (Wed)
From: hplabs!hao!seismo!cmcl2!floyd!clyde!burl!ulysses!allegra!princeton!eosp1!robison
      @ Ucb-Vax.arpa
Subject: Re: Relevance of "souls" to AI
Article-I.D.: eosp1.927

Philip Kahn, in his discussion of souls and essences, writes:
>> A "soul," like the concept of "essence," is undefinable.
>> The problem of "cognition" is far more relevant to the study of AI because
>> it can be defined within some domain...  Whether "cognition"
>> constitutes a "soul" is again not relevant..."

I submit that the concept of "soul" is irrelevant only if AI is doomed
to utter failure.  Use your imagination and consider a computer program
that exhibits many of the characteristics of a human being in
its ability to reason, to converse, and to be creative and unexpected in
its actions.  How will you AI-ers defend yourself if a distinguished
theologian asserts that G-d has granted to your computer program a soul?

If he might be right, the program and its hardware must not be destroyed.
Perhaps it should not be altered either, lest its soul be lost.
The casual destruction, recreation and development of computer programs
containing souls will horrify many people.  You will face demonstrations,
destruction of laboratories, and government interference of the worst kind.

Start saving up now, for a defense fund for the first AI-er accused by
a district attorney of soul-murder.

On second thought, you have nothing to fear;  no one in AI is really trying
to make computers act like humans, right?

                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: Fri, 15 Jun 84 14:36:00 pdt
From: Harlan Sexton <hbs%BUGS@Nosc>
Subject: Mathematical Methods

  It is true that most mathematics papers contain
little of the sort of informal, sloppy, and confused thinking that
always accompanies any of the mathematical discovery that I have been
a party to, but these papers are written for and by professional
mathematicians in journals that are quite backlogged.
Also, although I have always been intrigued by the differences between
modes of discovery among various mathematicians of my acquaintance,
I never found knowing how others thought about problems
of much use to me, and I think that most practicing mathematicians
are even less inclined to wonder about such things than I was when I
was a "real" mathematician.
  However, in response to the comment by David ???, I can only say that
I, and most of my fellow graduate students to whom I talked about such things,
had no trouble recalling the processes whereby we arrived at the ideas
behind proofs (and the process of proving something given an "idea"
was just tedious provided the idea was solid).
The process used to arrive at the idea, however, was as idiosyncratic
as the process one uses to choose a spouse, and it was generally as portable.
  I found it very useful to know WHAT people thought about various things,
and I learned a great deal from my advisor about valuable attitudes toward
PDE's, for example (sort of expert knowledge about what to expect from a
PDE), but HOW he thought about them was
not useful. (With the exception of the infamous Paul J. Cohen, I felt that I
appreciated HOW these other people thought; it was just that it felt like
wearing someone else's shoes to think that way. In Cohen's case we just
figured that Paul was so smart that he didn't have to think, at least like
normal people.)
  In the last year or so of my graduate career, someone came to the mathematics
department and interviewed a number of graduate students, including me,
about something which had to do with how we thought about mathematical
constructs (of very simple types which they specified). Presumably this
information, and related things, would be of some interest to Bundy. I'm
sorry that I can't be more specific, but if he would contact the
School of Education at Stanford (or maybe the Psychology Dept., but I think
this had to do with some project on mathematics education), they might be
able to help him. There is also a short book by J. Hadamard, published by
Dover, and some writings by H. Poincare', but as I recall these weren't
very detailed (and he probably knows of them already anyway). Finally,
I know that for a while Paul Cohen was interested in mathematical theorem
proving, and so he might have some useful information and ideas, as well.
(I believe that he is still in the Math. Dept. at Stanford. The AMS MAA SIAM
Combined Membership List should have his address.) --Harlan Sexton

------------------------------

Date: Fri 15 Jun 84 13:25:05-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commonsense Reasoning?

I'm not sure whether the following probes our commonsense reasoning
ability or simply demonstrates a quirk of natural language:

  "The Monday class will meet on Tuesday next week.  The Wednesday
  class will thus be the day after the Monday class.  (We may
  decide to hold the Friday class on Wednesday and the Wednesday
  class on Friday if everyone can make it then.)"


Another example along the same line is:

  If 3 were half of 5, what would a third of 10 be?

Although it's easy enough to finesse the problem by claiming that
this is nonsense, most people would find the answer 4 to be quite
reasonable.  The answer is derived by following the chain 3:5/2
as 6:5 as 12:10 as 4:10/3, where ":" represents some unspecified
transformation that is assumed to be linear.  I consider this similar
to the nonlinear Monday:Tuesday reasoning above.
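
[The linear reading of the puzzle can be checked mechanically.  This
sketch (illustrative Python, not part of the original message) treats
"3 were half of 5" as fixing a scale factor k and applies the same k to
"a third of 10":

```python
from fractions import Fraction

# "3 is half of 5" fixes a linear distortion k: 3 = k * (5/2), so k = 6/5.
k = Fraction(3) / Fraction(5, 2)
# Under the same distortion, "a third of 10" becomes k * (10/3).
answer = k * Fraction(10, 3)
print(answer)   # 4
```
]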

                                        -- Ken Laws

------------------------------

Date: 14 Jun 84 7:29:14-PDT (Thu)
From: ihnp4!cbosgd!rbg @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: cbosgd.20

> I believe there are approximately 10 to the 9th neurons in a human
> brain, if that's of any help.  Add in the glial cells (there is some
> debate about their function) and it comes to 10 to the 10th.
>  Bob Binstock

Those numbers are both wrong, but so was the number in my original posting.
Let me correct the numbers, and add the discussion to some other groups which
may or may not be interested.

Recent estimates of the number of neurons in the human brain have been
increasing, for a current estimated total of between 30x10↑9 and 50x10↑9.
Glial cells outnumber neurons by at least 10 to one, and occupy about half
the volume of the brain, but the ratio varies widely between brain regions,
and between species within a brain region.

To get an estimate of the computational equivalent of the brain:

Assume 5x10↑10 neurons with 2x10↑4 synapses each = 10↑15 synapses/brain.
Each synapse, on average, adds in a quantity about 20 times/sec (it can
go much faster, but not many do at the same time).  So that's 2x10↑16 very
simple approximate adds per second.

Even when everything is just right, a Cray can't do better than about 10↑9
simple integer adds per sec.
So, IF THE SYNAPSES ARE BEING USED WITH TOTAL EFFICIENCY FOR PERFORMING
THE TASK, a brain is worth about 10↑7 Crays.

[Credit for this calculation to Terry Sejnowski (Biophysics, Johns Hopkins)
and Geoff Hinton (Computer Science, Carnegie-Mellon)].
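The arithmetic above multiplies out as claimed; this sketch simply restates the posting's own figures, which are rough estimates rather than measurements:

```python
neurons   = 5e10             # 5x10^10 neurons (estimate from the text)
synapses  = neurons * 2e4    # 2x10^4 synapses each -> 10^15 synapses
brain_ops = synapses * 20    # each adds ~20 times/sec -> 2x10^16 adds/sec
cray_ops  = 1e9              # ~10^9 simple integer adds/sec
print(brain_ops / cray_ops)  # -> 2x10^7, i.e. about 10^7 Crays
```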

It is not surprising that most tasks use only a small fraction of this
capacity.  However, I think the computation of the amount of information
used to store sensory perceptions by hound!rfg may be misleading:

>>if I assume a visual field as 10**3 bits high by 10**4 bits wide
>>by 10 bits for color and shading of each element, we have 10**8 bits
>>per visual field. Suppose a life time of 72 years and 16 hours a day of
>>observing (neglecting "visual dreams" which may also be remembered),
>>with a new observation every 10 seconds. I multiply it all out to
>>about 1.5 x 10**16 bits. (187,500 billion bytes?)
>>Adding audio, tactile, olfactory, taste to that ought to easily run the
>>total over 200 gigabytes. That's just for remembering observations
>>(eidetically, which is a faculty some do have).

Most people do not remember every detail of every scene they ever see.
How much of your early childhood (0-4) do you even remember at all?
Emotional content of a situation can have a large impact on what and how
much you recall.  Dangerous or joyful experiences stand out in memory
more than most neutral events.

The role of language is also an important issue in considering the storage
and information processing capacity of the brain.  Using a word to stand
for the many features which make up an object or a concept is an incredible
data compression.  This may be why the gradual increase in computational
ability across primate evolution is not a very satisfying explanation for
the quantum leap in human intellectual ability.  Many of the explanations
of the origin of consciousness rely on the advantages of language for
improving analytic ability.  The one I like best is Julian Jaynes's idea
(The Origin of Consciousness in the Breakdown of the Bicameral Mind):
that consciousness is not just a simple consequence of language, but
that the exponential growth in knowledge fostered by language generates
self-consciousness only after certain kinds of concepts are introduced
into language.  This allows him to trace the evolution of consciousness
by literary analysis!

Rich Goldschmidt    -- a former brain hacker (now reformed?)
cbosgd!rbg

------------------------------

Date: Wednesday, 13 June 1984 11:41:32 EDT
From: Mario.Barbacci@cmu-cs-spice.arpa
Subject: call for papers

                        CALL FOR PAPERS

                WORKSHOP ON HARDWARE DESIGN VERIFICATION
                        November 26-27, 1984
                Technical University of Darmstadt, F.R. Germany

This workshop is organized by IFIP Working Groups 10.2 and 10.5.  The program
will cover all aspects of verification methods for hardware systems,
including:

        Correctness of hardware design,
        Tools and methodologies for verification,
        Verification of multilevel descriptions,
        Timing verification,
        Temporal logic,
        Correctness by construction,
        Circuit extractors,
        Design rule checkers,
        Language issues,
        Application of AI techniques.

PARTICIPATION IS BY INVITATION ONLY. If you would like to propose a
contribution to the workshop send a short summary of the intended
presentation to the Workshop Chairman before July 31, 1984. Notices of
acceptance will be sent by September 15, 1984.

Workshop Committee:

Hans Eveking (Chairman)                 Stephen Crocker
Institut fuer Datentechnik              Aerospace Corporation
Technical University of Darmstadt       P.O. Box 92957
D-6100 Darmstadt                        Los Angeles
Fed. Rep. Germany                       California 90009
(49) (6151) 162075

George J. Milne                         Robert Piloty
Computer Science Department             Institut fuer Datentechnik
University of Edinburgh                 Technical University of Darmstadt
Edinburgh, Scotland                     D-6100 Darmstadt
United Kingdom                          Fed. Rep. Germany

------------------------------

End of AIList Digest
********************

∂20-Jun-84  1154	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #75
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Jun 84  11:53:10 PDT
Date: Wed 20 Jun 1984 10:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #75
To: AIList@SRI-AI


AIList Digest           Wednesday, 20 Jun 1984     Volume 2 : Issue 75

Today's Topics:
  Expert Systems - Regression Analysis,
  AI Tools - Q'NIAL & Pandora Project,
  Conference - AAAI-84 Program Now Available,
  AI News - Army A.I. Grant to Texas,
  Standards - Maintaining High Quality in AI Products,
  Social Implications - Artificial People,
  Seminar - Precondition Analysis
----------------------------------------------------------------------

Date: Wed, 20 Jun 84 12:27:07 EDT
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@Brl-Voc.ARPA>
Subject: request for information

Hi,

Does anyone know of any expert systems to aid regression analysis ?
I've been told that Bell Labs is working in the area of AI data
analysis; William Gayle is reportedly developing a program called REX.
I would appreciate any information in this area (net addresses, phone
numbers, references, etc).  Thanks.

                                        dsw, fferd
                                        Fred S. Brundick
                                        USABRL, APG, MD.
                                        <fsbrn@brl-voc>

[Bill Gayle has been developing an expert system interface to the
Bell Labs S statistical package.  I believe it is based on the
Stanford Centaur production/reasoning system and that it uses
"pipes" to invoke S for analysis and display services.  Gayle's
system currently has little expertise in analyzing residuals,
but it does know what types of transformations might be applied
to different data types.  It is basically a helpful user interface
rather than an automated analysis system.

Rich Becker, one of the developers of S, has informed me that
source code for S is available.  Call 800-828-UNIX for information,
or write to

        AT&T Technologies Software Sales
        PO Box 25000
        Greensboro, NC 27420

For a description of the S package philosophy see Communications of
the ACM, May 1984, Vol. 27, No. 5, pp. 486-495.

Another automated data analysis system is the RADIX (formerly RX)
system being developed at Stanford by Dr. Robert Blum and his students.
It has knowledge about drug interactions, symptom onset times, and
other special considerations for medical database analysis.  It is
designed to romp through a database looking for interesting correlations,
then to design and run more (statistically) controlled analyses to
attempt confirmation of the discovered effects.

                                        -- Ken Laws ]

------------------------------

Date: Tue 19 Jun 84 12:44:50-EDT
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Re: Q'NIAL

According to an advertisement I got, NIAL is "nested interactive array
language" and Q'NIAL is a Unix implementation from Queen's University at
Kingston, Ontario.  It claims to be a sort of cross between LISP and APL with
"nested arrays" instead of APL flat arrays or LISP nested lists, "structured
control constructs... and a substantial functional programming subset."  The
address is Nial Systems Ltd., 20 Hatter St., Kingston, Ontario K7M 2L5 (no
phone # or net address listed).  I don't know anything about it other than what
the ad says.

------------------------------

Date: Sun 17 Jun 84 16:28:44-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Pandora Project

   In the July 1984 issue of Esquire appears an article by Frank Rose
entitled "The Pandora Project." Rose provides some glimpses into work
at Berkeley by Robert Wilensky and Joe Faletti on the commonsense
reasoning programs PAMELA and PANDORA.

--Wayne McGuire

------------------------------

Date: 17 June 1984 0019-EDT
From: Dave Touretzky at CMU-CS-A
Subject: AAAI-84 Program Now Available

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The program for AAAI-84, which lists papers, tutorials, panel discussions,
etc., is now available on-line, in the following files:

        TEMP:AAAI84.SCH[C410DT50]       on CMUA
        <TOURETZKY>AAAI84.SCH           on CMUC
        [g]/usr/dst/aaai84.sch          on the GP-Vax

The program is 36 pages long if you print it on the dover in Sail10 font.

------------------------------

Date: Tue 19 Jun 84 18:26:11-CDT
From: Gordon Novak Jr. <CS.NOVAK@UTEXAS-20.ARPA>
Subject: Army A.I. Grant to Texas

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

The U.S. Army Research Office, headquartered in Research Triangle Park,
North Carolina, has announced the award of a contract to the University
of Texas at Austin for research and education in Artificial Intelligence.
The award is for approximately $6.5 million over a period of five years.

The University of Texas has established an Artificial Intelligence
Laboratory as an organized research unit.  Dr. Gordon S. Novak Jr. is
principal investigator of the project and has been named Director of
the Laboratory.  Dr. Robert L. Causey is Associate Director.
Other faculty whose research is funded by the contract and who will be
members of the Laboratory include professors Robert F. Simmons, Vipin
Kumar, and Elaine Rich.  All are members of the Department of Computer
Sciences except Dr. Causey, who is Chairman of the Philosophy Department.

The contract is from the Electronics Division of the Army Research Office,
under the direction of Dr. Jimmie Suttle.  The contract will provide
fellowships and research assistantships for graduate students, faculty
research funding, research computer equipment, and staff support.

The research areas covered by the Army Research Office contract include
automatic programming and solving of physics problems by computer (Novak),
computer understanding of mechanical devices described by English text
and diagrams (Simmons), parallel programs and computer architectures for
solving problems involving searching (Kumar), reasoning under conditions
of uncertainty, and intelligent interfaces to computer programs (Rich).

------------------------------

Date: Tuesday, 19-Jun-84 12:19:22-BST
From: BUNDY HPS (on ERCC DEC-10) <Bundy%edxa@ucl-cs.arpa>
Subject: Maintaining High Quality in AI Products

        Credibility has always been a precious asset for AI, but never
more so than now.  We are being given the chance to prove ourselves.  If
the range of AI products now coming onto the market is shown to
provide genuine solutions to hard problems, then we have a rosy future.
A few such products have been produced, but our future could still be
jeopardized by a few well-publicized failures.

        Genuine failures - where there was a determined, but ultimately
unsuccessful, effort to solve a problem - are regrettable, but not fatal.
Every technology has its limitations.  What we have to worry about are
charlatans and incompetents taking advantage of the current fashion
and selling products which are overrated or useless.  AI might then be
stigmatized as a giant con-trick, and the current tide of enthusiasm
would ebb as fast as it flowed.  (Remember Machine Translation - it
could still happen.)

        The academic field guards itself against charlatans and
incompetents by the peer review of research papers, grants, PhDs, etc.
There is no equivalent in the commercial AI field.  Faced with this
problem other fields set up professional associations and codes of
practice.  We need a similar set-up and we needed it yesterday.  The
'blue chip' AI companies should get together now to found such an
association.  Membership should depend on a continuing high standard of
AI product and in-house expertise.  Members would be able to advertise
their membership and customers would have some assurance of quality.
Charlatans and incompetents would be excluded or ejected, so that the
failure of their products would not be seen to reflect on the field as
a whole.

        A mechanism needs to be devised to prevent a few companies
annexing the association to themselves and excluding worthy
competition.  But this is not a big worry.  Firstly, in the current state
of the field AI companies have a lot to gain by encouraging quality in
other companies.  Every success increases the market for everyone,
whereas failure decreases it.  Until the size of the market has been
established and the capacity of the companies has risen to meet it, they
have more to gain than to lose by mutual support.  Secondly, excluded
companies can always set up a rival association.

        This association needs a code of practice, which members would
agree to adhere to and which would serve as a basis for refusing
membership.  What form should such a code take, i.e.  what counts as
malpractice in AI?  I suspect malpractice may be a lot harder to define
in AI than in insurance, or medicine, or travel agency.  Due to the
state of the art, AI products cannot be perfect.  No-one expects 100%
accurate diagnosis of all known diseases.  On the other hand a program
which only works for slight variations of the standard demo is clearly
a con.  Where is the threshold to be drawn and how can it be defined?
What constitutes an extravagant claim?  Any product which claims to
understand any natural language input, or to make programming
redundant, or to allow the user to volunteer any information, sounds
decidedly smelly to me.  Where do we draw the line?  I would welcome
suggestions and comments.

                Alan Bundy

------------------------------

Date: 22 Jun 84 6:44:56-EDT (Fri)
From: hplabs!tektronix!uw-beaver!cornell!vax135!ukc!west44!greenw @
      Ucb-Vax.arpa
Subject: Human models
Article-I.D.: west44.243


[The time has come, the Walrus said, to talk of many things...]

        Consider...
        With present computer technology, it is possible to build
 (simple) molecular models, and get the machine to emulate exactly
 what the atoms in the `real` molecule will do in any situation.

        Consider also...
        Software and hardware are getting more powerful; larger models
can be built all the time.

[...Of shoes and Ships...]

        One day someone may be able to build a model that will be an exact
duplicate of a human brain.
        Since it will be perfect down to the last atom, it will also be
able to act just like a human brain.
        i.e. It will be capable of thought.

[...And Sealing Wax...]

        Would such an entity be considered `human`?  For, though it would
not be `alive` in the biological sense, someone talking on the telephone
to its very sophisticated speech synthesiser, or reading a letter typed
by it, would consider it to be a perfectly normal, if rather intelligent,
person.
        Hmmmmmm.

        One last thought...
        Even if it could be given all the correct education, might it still
suffer from the HAL 9000 syndrome [2001]: fear of being turned off if it
did something wrong?

[...of Cabbages and Kings.]

Jules Greenwall,
Westfield College, London, England.

from...

     vax135            greenw            (UNIX)
         \            /
   mcvax- !ukc!west44!
         /            \
     hou3b             westf!greenw      (PR1ME)


The MCP is watching you...
End of Line.

------------------------------

Date: 18 Jun 84 13:27:47-PDT (Mon)
From: hplabs!hpda!fortune!crane @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: fortune.3615

Up to this point the ongoing discussion has neglected to take two
things into account:

        (1) Subconscious memory - a person can be enabled (through
        hypnosis or by asking him the right way) to remember
        infinite details of any experience of this or prior life
        times. Does the mind selectively block out trivia in order
        to focus on what's important currently?

        (2) Intuition - by this I mean huge leaps into discovery
        that have nothing to do with the application of logical
        association or sensory observation. This kind of stuff
        happens to all of us and cannot easily be explained by
        the physical/mechanical model of the human mind.

I agree that if you could build a computer big enough and fast
enough and taught it all the "right stuff", you could duplicate
the human brain, but not the human mind.

I don't intend to start a metaphysical discussion, but the above
needs to be pointed out once in a while.

John Crane

------------------------------

Date: Wed 20 Jun 84 10:01:39-PDT
From: WYLAND@SRI-KL.ARPA
Subject: The Turing Test - machines vs people

        Tony Robison (AIList V2 #74) and his comments about
machine "soul" brings up the unsettling point - what happens when
we make a machine that passes the Turing test?  For:

  o  One of the goals of AI (or at least some workers in the
     field - hedge, hedge) is to make a machine that will pass
     the Turing test.

  o  Passing the Turing test means that you cannot distinguish
     between man and machine by their written responses to
     written questions (i.e., over a teletype).  Today, we could
     extend the definition to include oral questions (i.e., over
     the telephone) by adding speech synthesis and recognition.

  o  If you cannot tell the difference between person and machine
     by the formal social interaction of conversation, *how will
     the legal and social systems differentiate between them!!*

     Our culture(s) is set up to judge people using conversation,
written or oral: the legal arguments of courts, all of the
testing through schools, psychological examination, etc.  We have
chosen the capability for rational conversation (including the
potential capability for it in infants, etc.) as the test for
membership in human society, rejecting membership based on
physical characteristics such as body shape (men/women,
"foreigners") and skin color, and the content of the
conversations such as provided by cultural/religious/political
beliefs, etc.  If we really do make machines that are
*conversationally indistinguishable* from humans, we are going to
have some interesting social problems, whether or not machines
have "souls".  Will we have to reject rational conversation as
the test of membership in society?  If so, what do we fall back
on?  (The term "meathead" may become socially significant!)  And
what sort of interesting things are going to happen to the
social/legal/religious systems in the meantime?

Dave Wyland
WYLAND@SRI

P.S.    Asimov addressed these problems nicely in his renowned "I,
Robot" series of stories.

------------------------------

Date: 18 Jun 1984  14:21 EDT (Mon)
From: Peter Andreae <PONDY%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Precondition Analysis

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

         PRECONDITION ANALYSIS - LEARNING CONTROL INFORMATION


                            Bernard Silver

                 Dept. of AI, University of Edinburgh

                       2pm Wednesday, June 20.
                          8th Floor Playroom


I will describe LP, a program that learns equation solving strategies from
worked examples.  LP uses a new learning technique called Precondition
Analysis.  Precondition Analysis learns the control information that is
needed for efficient problem solving in domains with large search spaces.

Precondition Analysis is similar in spirit to the recent work of Winston,
Mitchell and DeJong.  It is an analytic learning technique, and is capable
of learning from a single example.

LP has successfully learned many new equation solving strategies.

------------------------------

End of AIList Digest
********************

∂21-Jun-84  2327	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #76
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Jun 84  23:27:13 PDT
Date: Thu 21 Jun 1984 22:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #76
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jun 1984       Volume 2 : Issue 76

Today's Topics:
  VLSI - Panel on Chips for AI & Trilogy CPU Failure,
  Databases - Oxford English Dictionary goes On-Line,
  Logic - Common Sense Summer,
  Mind & Brain - Artificial People & Neural Connections & Recall,
  Seminar - Natural Language Parsing
----------------------------------------------------------------------

Date: 20 June 1984 0512-EDT
From: Dave Touretzky at CMU-CS-A
Subject: panel on chips for AI

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Dana Seccombe is looking for people to participate in a panel discussion
at ISSCC (International Solid State Circuits Conference) to be held in
February '85 in New York City.  The topic of the panel is issues in the
realization of AI systems using VLSI technology, e.g. AI inference
engines, 5th generation architectures, or Lisp processors that are or
could be implemented using VLSI.

If you would be interested in participating in this panel, please contact
Mr. Seccombe at (408) 257-7000 x4854.  DON'T contact me, because I don't
know any more about it than what you've just read.

------------------------------

Date: 19 Jun 1984 11:07:46-EDT
From: Doug.Jensen at CMU-CS-G
Subject: Trilogy CPU design fails

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

After 4 years and $220 million, Gene Amdahl's Trilogy Corp. has declared
their attempt to build a computer from 2.5" diameter whole wafer VLSI a
failure. They never got even one wafer functioning correctly much less ever
powered up a machine. Trilogy thus follows in the path of TI and many other
whole wafer failures before them over the past decade; the others were less
known because they were military projects. Trilogy was one of, and probably
THE, most publicized and heavily funded new startup in the history of the
computer business. They were spending $7 million/month and estimated that
they would need at least another $100 million just to get them to mid-85,
while their first machine was still two years beyond that (more than two
years later than they estimated when they started in 1980). Each 2.5" wafer
was to contain about 60K ECL gates, with four layers of metalization, and
dissipate about 1000 watts. The CPU was to have nine wafers and execute 32
MIPS. Trilogy was even further behind on the other computer subsystems. They
now say they may try a smaller machine, or just subsystems (e.g., memories),
or just wafers and related technology. DEC, Sperry, and CII-HB were among
the investors in Trilogy.

------------------------------

Date: 13-Jun-84 02:30 PDT
From: William Daul  Augmentation Systems Division / MDC
Subject: Oxford English Dictionary goes On-Line

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

LONDON -- ...the Oxford University Press has announced plans to
publish a computerized version of the venerable Oxford English
Dictionary.

With the help of a $1.4 million donation from IBM United Kingdom Ltd.,
the British publisher will produce the first fully integrated edition
of the 13-volume dictionary since the original work was begun in 1884.
That first edition took 44 years to complete; the publisher said it
will be able to complete the second edition in a fraction of that
time.

...

The New Oxford English Dictionary, as the new version has been named,
will constitute the largest electronic dictionary data base in the
world.  The present multi-volume version consists of more than 20,000
printed pages.  Computerization of the dictionary is a massive
undertaking that will involve the data entry of about 60 million words
used to record, describe and illustrate 500,000 words and phrases.
The Oxford University Press has hired International Computaprint Corp.
of Fort Washington, Pa., to do the data entry.  A staff of 120 people
has been assigned the task of completing the data entry by this
September.

...  Additionally, the company (IBM) is providing two data processing
specialists who will work on the dictionary project for two years.

Once the electronic dictionary is finished, it could be made available
on-line, on magnetic tape, on laser/video disk or possibly on a
single integrated circuit...

The publisher estimated the project will cost $10 million.  The
British government awarded the company a 3 year grant of roughly
$420,000 -- or 25% of the development cost -- for the dictionary.

The University of Waterloo in Ontario will conduct a survey for the
publisher of the potential users of an electronic dictionary.  The
university will also help develop software that would be needed to
take advantage of an electronic dictionary.

------------------------------

Date: Wed 20 Jun 84 22:06:12-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Newsletter, June 21, No. 37

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                        COMMON SENSE SUMMER

CSLI is sponsoring a summer-long workshop called "Common Sense Summer."
It has long been agreed that language use and intelligent behavior in
general require a great deal of knowledge about the commonsense world.  But
heretofore no one has embarked on a large-scale effort to encode this
knowledge.  The aim of Common Sense Summer is to make the first three months
of such an effort.  We are attempting to axiomatize in formal logic
significant amounts of commonsense knowledge about the physical,
psychological and social worlds.  We are concentrating on eight domains:
shape and texture, spatial relationships, lexical semantics of cause
and possession, properties of materials, certain mental phenomena,
communication, relations between textual entities and entities in the world,
and responsibility. We are attempting to make these axiomatizations mutually
consistent and mutually supportive.  We realize, of course, that all that
can be accomplished during the summer is tracing the broad outlines of each
of the domains and, perhaps, discovering several hard problems.

Nine graduate students from several universities are participating in the
workshop full-time.  In addition, a number of other active researchers in
the fields of knowledge representation, natural language, and vision are
participating in meetings of various sizes and purposes.  There will be two
or three presentations during the summer, giving progress reports for the
general public.  The workshop is being coordinated by the writer.

                                        --Jerry Hobbs

[...]

------------------------------

Date: Wed, 20 Jun 84 17:25:03 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Human Models

The foundation of the reasoning constructed by Jules Greenwall in his note
depends on being able to specify exactly the behavior of atoms in molecules.
The precise description required depends on the molecular physics.
Unfortunately, that study is still going on.  The study of the molecule is
a many-body
problem for which there is no closed-form solution.  Another fly in the
ointment is the fact that the behavior of atoms in molecules depends, albeit
in second order, on the nature of the nucleus.  This is another branch of
physics that is very active, i.e. much is not known.  What one would get
for a model built on such a fuzzy foundation is of dubious value.

  --Charlie

------------------------------

Date: 18 Jun 84 10:07:07-PDT (Mon)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!lwt1 @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: aplvax.663

The other thing to note is that while each 'memory cell' in a computer
has ~2 connections, each 'memory cell' in the brain has ~100.  Since
processing power is related to (cells * connections), a measure of
relative capacities is not sufficient for comparison between the brain
and the CRAY.
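As a rough sketch of that (cells * connections) measure, using ballpark figures from this thread; the machine-side cell count is an invented illustration, not a description of any real computer:

```python
brain_cells, brain_conns     = 5e10, 100  # neurons, ~100 connections each
machine_cells, machine_conns = 1e9,  2    # hypothetical memory, ~2 each
# Connectivity-weighted comparison rather than raw cell counts.
print((brain_cells * brain_conns) / (machine_cells * machine_conns))  # -> 2500.0
```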


                                                -Lloyd W. Taylor
                                                 ... seismo!umcp-cs!aplvax!lwt1
            ---I will have had been there before, soon---

------------------------------

Date: Thu, 21 Jun 84 06:39 EDT
From: dmrussell.pa@XEROX.ARPA
Subject: Objection to Crane: A Quick Question - Mind and Brain --   V2

Sorry, but I must make a serious objection to your claim that

"...    a person can be enabled (through
        hypnosis or by asking him the right way) to remember
        infinite details of any experience of this or prior life
        times ... "

I object to the use of the term "infinite" in describing memory.  That
simply isn't true.  If you just mean "large number", then say so.  The
infinite memory capacity problem was addressed once (in either AIDigest
or HumanNets, I've forgotten) and found indefensible.

The phrase "prior life times" assumes reincarnation, a completely
unsupported assumption.

"of any experience" demands that all experiences can be recalled, not
just *recognized*, or *restored* but recalled!  Do you really want the
references to show that this isn't true?  Memory recall under hypnosis
has been found to be just as reconstructive (perhaps more so) as normal
memory.  Hypnotic states buy you some recall, but not that much!

We haven't taken these things into account because they simply aren't
true, or at the very least, can't be supported by anything other than
religious belief.

  -- D.M. Russell. --

------------------------------

Date: 18 Jun 84 15:08:10-PDT (Mon)
From: ihnp4!ihldt!stewart @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: ihldt.2382

>       (1) Subconscious memory - a person can be enabled (through
>       hypnosis or by asking him the right way) to remember
>       infinite details of any experience of this or prior life
>       times.

I don't know where the "prior life" part came from, but this claim is
usually an incorrect extrapolation of studies that indicate no such
thing.

What has been established is that people can be induced to remember
things that they considered forgotten.  This isn't by a long shot
the same thing as saying that we remember everything that's ever
happened to us.

If you have evidence to support this claim, by all means present it.  If
not, please spare us.

Bob Stewart
ihldt!stewart

------------------------------

Date: Thu, 21 Jun 84 08:23 EDT
From: Dehn@MIT-MULTICS.ARPA (Joseph W. Dehn III)
Subject: Turing test - legal implications

...computers someday might act like people...  ...legal system is based
on capability for rational conversation...  ...what will we do????...
...will we have to reject rational conversation as the test of
membership in society?...

Sorry, I must have forgotten, but why exactly do we WANT to distinguish
between humans and machines?

                            -jwd3

------------------------------

Date: Thu, 21 Jun 84 14:14 EST
From: Huhns <huhns%scarolina.csnet@csnet-relay.arpa>
Subject: Seminar - Natural Language Parsing


             CONSTRAINT PROPAGATION SENTENCE PARSING

                         Somnuek Anakwat

                 Center for Machine Intelligence
                     College of Engineering
                  University of South Carolina

                 2pm Thursday, June 21, Room 230

     An algorithm for parsing English sentences by the method of
constraint propagation is presented.  This method can be used to
recognize English sentences and indicate whether those sentences are
syntactically correct or incorrect according to grammar rules.  The
central idea of constraint propagation in sentence analysis is to form
all possible combinations of the parts of speech from adjacent words
in the input sentence, and then compare those combinations with
English grammar rules for allowable combinations.  The parts of
speech for each word may be modified, left alone, or eliminated
according to these rules.  The analysis of these combinations of the
parts of speech normally proceeds from left to right.  The most
significant feature of the algorithm presented is that grammar
constraints propagate backward whenever possible.  The algorithm is
very useful when the given sentence contains words which have
multiple properties.  The algorithm also has an efficient parallel
implementation.

     Results  of  applying  the  algorithm  to  several   English
sentences  are  included.   An  interpretation of the algorithm's
performance and some topics for future research are discussed  as
well.
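[The pruning idea in the abstract can be caricatured in a few lines: give
each word a set of candidate parts of speech and repeatedly discard any
candidate that no allowed adjacency supports, so constraints flow backward
as well as forward.  A minimal sketch -- the lexicon and the adjacency
table below are invented for illustration and are not taken from Anakwat's
algorithm:

```python
# Allowed (part-of-speech, part-of-speech) adjacencies -- a toy grammar.
ALLOWED = {
    ("det", "noun"), ("det", "adj"), ("adj", "noun"),
    ("noun", "verb"), ("verb", "det"), ("pron", "verb"),
}

def parse_ok(tags):
    """tags: one set of candidate parts of speech per word.
    Repeatedly drop any candidate with no allowed neighbor on either
    side; constraints thus propagate backward as well as forward."""
    changed = True
    while changed:
        changed = False
        for i, cands in enumerate(tags):
            for t in list(cands):
                left_ok = i == 0 or any((l, t) in ALLOWED for l in tags[i - 1])
                right_ok = (i == len(tags) - 1
                            or any((t, r) in ALLOWED for r in tags[i + 1]))
                if not (left_ok and right_ok):
                    cands.discard(t)
                    changed = True
    return all(tags)  # accepted iff every word keeps at least one tag

# "the duck swims": "duck" could be noun or verb; the rules prune "verb".
sent = [{"det"}, {"noun", "verb"}, {"verb"}]
print(parse_ok(sent), sent[1])  # -> True {'noun'}
```
]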

------------------------------

End of AIList Digest
********************

∂22-Jun-84  0657	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #77
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Jun 84  06:56:03 PDT
Date: Fri 22 Jun 1984 05:12-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #77
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jun 1984       Volume 2 : Issue 77

Today's Topics:
  AI Tools - Q'NIAL,
  Cognition - Mathematical Methods & Commonsense Reasoning,
  Books - Softwar, A New Weapon to Deal with the Soviets
----------------------------------------------------------------------

Date: 19 Jun 84 14:59:27-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!janney @ Ucb-Vax.arpa
Subject: Re: Q'NIAL
Article-I.D.: unm-cvax.962

The April 1984 issue of Computer Design has an article on Nial
(Nested Interactive Array Language).

------------------------------

Date: 18 Jun 84 15:10:07-PDT (Mon)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!jcz @ Ucb-Vax.arpa
Subject: Re: Mathematical Methods
Article-I.D.: ncsu.2622

It is not surprising that mathematicians cannot
remember what they do when they first
construct proofs, especially 'difficult' proofs.

Difficult proofs probably take quite a bit of processing power,
with none left over for observing and recording what was done.

In order to get a record of what exactly occurs (a 'protocol')
when a proof is being constructed, we would have to interrupt the
subject and get him to tell us what he is doing - interfering with
the precise things we want to measure!

There is much the same problem with studying how programmers
write programs.    We can approach a recording by saving every scrap of paper
and recording every keystroke, but that is not such a great clue
to mental processes.

It would be nice if some mathematician would save EVERY single scrap of
paper ( timestamped, please! ) involved in a proof, from start to finish.
Maybe we would find some insight in that. . .

John Carl Zeigler
North Carolina State University

------------------------------

Date: Wed, 20 Jun 84 20:39:01 edt
From: Roger L. Hale <rlh@mit-eddie>
Subject: Re: Commonsense Reasoning?

        From: Roger L. Hale <rlh@mit-eddie>
        Subject: Re: Commonsense Reasoning?

        I get 4 quite a different way:
          If 3 (2) were half of 5 (4), what would a third of 10 (9) be? 4 (3).
        This way twice 5 (4) is 9 (8), [rather than twice 6 (5) is 12 (10)
        the way you describe.]

The transformation I have in mind is "say 3 and mean 2", which is simply
difference-of-1.  The numbers I *mean* are in the stated relations
(half, a third, twice) but they are renamed by a distorting filter,
a homomorphism.  "If arithmetic were shifted right one, what would half of 5,
a third of 10, twice 5 be?  (Answer: 3, 4 and 9.)"  Partly it is a different
choice of who to believe, the numbers or the relations; but I find this
form most compelling due to the components being so fundamental.
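[The shift-by-one filter is easy to spell out mechanically; a small sketch,
in which all the function names are invented for illustration:

```python
# "Say n and mean n-1": a distorting filter over true arithmetic.
def say(true_value):       # rename a true number on the way out
    return true_value + 1

def mean(stated_value):    # decode a stated number back to its meaning
    return stated_value - 1

def shifted(op, *stated):
    """Apply op to the numbers *meant*, then restate the result."""
    return say(op(*(mean(s) for s in stated)))

print(shifted(lambda a: a // 2, 5))   # half of "5" (really 4)     -> 3
print(shifted(lambda a: a // 3, 10))  # a third of "10" (really 9) -> 4
print(shifted(lambda a: 2 * a, 5))    # twice "5" (really 4)       -> 9
```

The three outputs match the 3, 4 and 9 in the message above.]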


The extended proportion [for your method in parallel form] would be
        3 : 5/2 :: 4 : 10/3 :: 12 : 2*5,
the 12 (v. 9) serving to show that our two methods differ concretely.

        I think that the critical point for AI is that we make sense of
        a nonsense problem by postulating an unmentioned linear
        transformation since only a linear transformation permits a unique
        solution.  [...]  -- KIL

In the first place, any critically constrained transformation has discrete
(locally unique) solutions, barring singularities; and it is false that
they are only unique for linear transformations:  it takes a fairly special
domain, like the complex analytic, to make it true.  In the second place,
what confidence should one gain in a theory on fixing a free parameter
against one datum?  Surely one should aim to constrain the theory as well
as the parameter, and you have used up all your constraints.  Where would
we be if twice 5 were neither 12 nor 9?   ?8-[   Back to square one.

                                Yours in inquiry,
                                Roger Hale
                                rlh%mit-eddie@mit-mc

------------------------------

Date: 19 Jun 84 14:00:58-PDT (Tue)
From: hplabs!tektronix!orca!shark!brianp @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: shark.836

     About "if 3 is half of 5, what is a third of 10?"

It is interesting to note the assumptions that might be made here.
One could assume that all numbers retain their good-old standard meaning,
except 3, when compared to 5.  Then the chain of relationships
(3:5/2, 6:5, 12:10, 4:10/3) can be made.  What I first thought was
"so what's a '10'? "  I.e, let's toss out all the definitions of the
numbers along with 3.  'Half' could be redefined, but that says
nothing about what to do with 'third'.  One could redefine 'is',
in effect, making it mean the ':' relation of the previous article.


Anybody have hypotheses on which assumptions or definitions one would
tend to drop first, when solving a puzzle of this sort?

                                Brian Peterson
                                ...!ucbvax!tektronix!shark!brianp

------------------------------

Date: 19 Jun 84 18:34:05-PDT (Tue)
From: hplabs!tektronix!orca!tekecs!davep @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: tekecs.3861


>   From: brianp@shark.UUCP (Brian Peterson)
>
>   It is interesting to note the assumptions that might be made here.
>   One could assume that all numbers retain their good-old standard meaning,
>   except 3, when compared to 5.  Then the chain of relationships
>   (3:5/2, 6:5, 12:10, 4:10/3) can be made.

If one redefines "3, when compared to 5", shouldn't the 3 be redefined in all
instances of the "chain of relationships"? If so, one could conclude that
one-"third" of 10 is 24/5 via 3:5/2, 6:5, 12:10, 12/(5/2):10/3, 24/5:10/3.


                                            David Patterson
                                            Tektronix, Inc.
                                            Wilsonville Industrial Park
                                            P.O. Box 1000
                                            Wilsonville, Oregon 97070
                                            (503) 685-2568

{ucb or dec}vax!tektronix!tekecs!davep      uucp address
davep@tektronix                             csnet address
davep.tektronix@rand-relay                  arpa address

------------------------------

Date: Wed 20 Jun 84 18:37:45-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SU-SCORE.ARPA>
Subject: softwar, a new weapon to deal with the Soviets ?

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

This is my translation of an article published in a French news magazine,
"Le Point"; I have done my best to translate it, but I am sure there are
some inadequacies.  I just hope they don't occur in important places.

I am just wondering if anyone has heard about this, and whether it is real,
pure computer fiction, or so well known that it's not worth flaming about.




"Between the atomic bomb and conventional weapons, there was nothing in the
American warfare equipment against the USSR. Now the time has come for
"soft bombs", to launch a destructive war without any bloodshed. This is the
topic of "Softwar", a forthcoming book written by a French computer scientist
working in New York. The idea: as simple as it is machiavelic. In the programs
that Soviet people get from Western countries are placed what amounts to "time
bombs": devices that can be triggered from afar to hamper the functioning of
Russian computers and paralyze the economy. With "Softwar", nuclear blackmail
becomes obsolete. Le Point asked the author, Thierry Breton, how his relations
with highly skilled American engineers has convinced him of the existence of
the new type of weapon.

LePoint:
is "Softwar" just an computer thriller, or do "soft bombs" really exist ?

ThierryBreton:
I have never used any myself, but they have already been used for a few years
in our trade.  Some countries in Africa and South America that are customers
of big American software companies have booby-trapped programs running in
their government agencies.  The aim of the software suppliers is to protect
themselves against customers who won't pay.  These soft bombs are set in vital
areas, like payroll routines, which are then paralyzed.  The customer has to
call the company, and won't get any help until the debts are cleared.  In such
cases people talk about technical problems in the computer, but obviously
never admit that the program contained a bomb.
Until now, these techniques had never been used for aggressive purposes.  But
there is absolutely no technical difficulty in doing so, so we are led to
believe that this new weapon could be used through non-strategic networks
giving access to databases: for example, the Stockex network, which gives
information on stock exchange values, or the WMO network, which carries
worldwide meteorological information.

LePoint:
Has softwar begun yet?

ThierryBreton:
For me, there is no doubt about it.  The Soviets use 80% of the American
databases.  It is this dependency on communication between computers which is
new, and which allows one to enter a territory.  Until now, the "bombs" had to
be triggered on the spot by someone inside the installation.  The bombs were
there, but could not be set off remotely.  Today, thanks to data transfer,
they can be reached from thousands of kilometers away.  In the book, I imagine
a bomb that is controlled, through Stockex, by the stock price of a particular
company specified in the software; the Pentagon, as long as it does not want
to detonate the "bomb", keeps the price away from the critical value by buying
or selling shares.

LePoint:
You give the names of some American organizations working for the Pentagon
whose job is to set bombs in programs and to activate them.  Is this real?

ThierryBreton:
The names quoted have been slightly modified from the real ones.  I took my
data from a group founded in 1982 by the American Army, called NSI (National
Software Institute).  This institute works on all programs which have military
applications.  In 1983, the Army spent 500 million dollars to debug its
programs.  Written in different languages, they have now been unified under
the Ada language.  This is the official objective of NSI.  But for these
military computer scientists, there is not much difference between finding
involuntary errors and adding voluntary ones...

LePoint:
What is the Trojan horse used to send those soft bombs to the USSR?

ThierryBreton:
The USSR lags about 10 to 15 years behind in computer science, the equivalent
of 2 or 3 generations of computers.  This lag in hardware causes an even more
important lag in artificial intelligence, which is the type of software
running on the machines the Soviets have to buy from Western countries.  They
are very eager to get those programs, and some estimate that 60% of the
software running there comes from the USA.  The most important conduit is
India, which has very good computer scientists.  Overnight, IBM was kicked
out, to be replaced by Soviet Elorg computers, the ES 10-20 and ES 10-60,
which are copies of IBM machines.  The Indians buy software from Western
countries, port it to the Elorgs, and then this software goes on to the USSR.

LePoint:
Can a trap be invisible, like a buried mole?

ThierryBreton:
Today, people know how to make bombs completely invisible.  The first
generation consisted of fixed bombs: lines of code never activated unless a
special signal was sent.  Then came the Polaris-type traps: like the missiles,
the programs contain decoys to fool the enemy, multiple traps of which only
one is active.  Then the stochastic bomb, the most dangerous one, which moves
within the program each time it is loaded.  These bombs are all the more
discreet in that they can be switched off from a distance, the failures then
disappearing in an inexplicable way.

LePoint:
Have there been cases in the USSR of problems that could be explained by a
soft bomb?

ThierryBreton:
Some unexplained cases, yes.  In November 1982, the international telephone
exchange was down for 48 hours.  Officially, the Soviets said it was a failure
of the main computer.  What actually caused it remains to be learned.  Every
day in the Soviet papers one can read that such and such a factory had to stop
production because of a shortage of some item.  When the Gosplan computers
break down, there are direct consequences for the production and functioning
of factories.

LePoint:
By talking about softwar, aren't you helping the Soviets?

ThierryBreton:
No. For 30 years, we have seen obvious attempts by the Soviets to destabilize
Western countries by infiltrating trade unions and pacifist movements.  The
Eastern bloc can remotely cause strikes.  But until now, there was no way to
retaliate with precise, disruptive actions.  In the context of the ideological
war, softwar gives us a way to strike back.
The book also shows that the Soviets have no choice.  They know that by buying
or otherwise obtaining this software, they are taking a big risk.  But if they
stop getting this software, the time it would take them to develop it
themselves will widen the gap.  This is a fact.  So soft bombs, like atomic
bombs, can be a means of deterrence.  For the politicians who are just
discovering this new strategy, the book is that of a new generation showing
the old one that what was a tool has become a weapon."


[This reminds me of an anecdote I heard Captain (now Cmdr) Grace Hopper tell.
It seems some company began to pass off a Navy-developed COBOL compiler
verifier as their own, removing the print statement that gave credit to
the Navy.  When the Navy came out with an improved version, the company
had the gall to ask for a copy.  Her development group complied, but
embedded concealed checks in the code so that it would fail to work if
the credit printout were ever altered.  -- KIL]

------------------------------

Date: Wed 20 Jun 84 20:07:35-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: softwar

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The article Jean-Luc (or whoever) translates sounds like a typical piece of
National Enquirer-style "reporting", namely it describes something that is
*just* feasible theoretically but against which countermeasures exist, and
which has wider ramifications than are mentioned.   I'm sure the Russians are
too paranoid to allow network access to important computers in such a way as to
trigger these "bombs".

But:  it is widely rumoured that IBM puts time-delayed self-destruct operations
into some of its programs so as to force you to buy the new release when it
comes out (and heaven help you if it's late?).   And in John Brunner's book
"The Shockwave Rider", one of America's defence systems is a program that would
bring down the entire national network, thus making it impossible for an
invader to control the country.

I love science fiction discussions, but I love them even more when they're not
on BBoard.
                                - Richard

[Another SF analogy: there is a story about the consequences of developing
some type of "ray" or nondirectional energy field capable of igniting
all unstable compounds within a large radius, notably ammunition, propellants,
and fuels.  This didn't stop the outbreak of global war, but did reduce it
to the stone age.

All that has nothing to do with AI, of course, except that computers may
yet be the only intelligent beings on the planet. -- KIL]

------------------------------

End of AIList Digest
********************

∂24-Jun-84  1136	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #78
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Jun 84  11:36:15 PDT
Date: Sun 24 Jun 1984 10:19-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #78
To: AIList@SRI-AI


AIList Digest            Sunday, 24 Jun 1984       Volume 2 : Issue 78

Today's Topics:
  AI Programming - Characteristics,
  Commonsense Reasoning - Hypothetical Math,
  Cognition - Humor & Memory & Intuition,
  Seminar - Full Abstraction and Semantic Equivalence
----------------------------------------------------------------------

Date: 20 Jun 84 12:14:49-PDT (Wed)
From: hplabs!hpda!fortune!amd70!intelca!glen @ Ucb-Vax.arpa
Subject: Re: Definition of an AI program
Article-I.D.: intelca.317

As a half-serious/half humorous suggestion:

Consider the fact that most of man's machines are built to do the same
thing over and over and do it very well.  Some random examples:
  - washing machine
  - automobile hood fastener on a production line
  - pacman video game

AI programs (hopefully) don't fit the mold, they don't spend their lives
performing the same routine but change as they go.


  ↑ ↑    Glen Shires, Intel, Santa Clara, Ca.
  O O     Usenet: {ucbvax!amd70,pur-ee,hplabs}!intelca!glen
   >      ARPA:   "amd70!intelca!glen"@BERKELEY
  \-/    --- stay mellow

------------------------------

Date: Fri 22 Jun 84 11:28:46-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: a third of ten

Please.   Everyone knows that 2*2=5 for sufficiently large values of 2.

More to the point, if you take the square root of 5 and round to the nearest
integer, you get 2.   Again, if you take half of 5 and round to the nearest
integer by the accepted method, you get 3.   A third of ten now becomes 3 as
well.   How many AI people does it take to change a lightbulb?
                                                - Richard


[One graduate student, but it takes eight years.  -- KIL (from John
Hartman, CS.Hartman@UTexas-20) ]
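[For what it's worth, the joke arithmetic does check out under round-half-up;
note that the built-in rounding in many languages is round-half-to-even, so
the "accepted method" has to be spelled out:

```python
import math

def round_half_up(x):
    # the schoolbook rule: a fractional part of .5 always rounds up
    return math.floor(x + 0.5)

print(round_half_up(5 ** 0.5))  # sqrt(5) = 2.236... -> 2
print(round_half_up(5 / 2))     # 2.5             -> 3
print(round_half_up(10 / 3))    # 3.333...        -> 3
```
]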

------------------------------

Date: 21 Jun 84 10:51:26-PDT (Thu)
From: decvax!decwrl!dec-rhea!dec-rayna!swart @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: decwrl.1845

I am reminded of an old children's riddle:

Q. If you call a tail a leg, how many legs does a horse have?

A. Four. Calling a tail a leg doesn't make it so.


                Mark Swartwout
                UUCP {allegra,decvax,ihnp4,ucbvax}!decwrl!rhea!rayna!swart
                ARPA MSWART@DEC-MARLBORO

------------------------------

Date: 21 Jun 84 22:07 PDT
From: Shrager.pa@XEROX.ARPA
Subject: Memory

This might amuse.  Authorship credit to Dave Touretzky@CMU.

  From: Dave Touretzky (DT50)@CMU-CS-A
  To: Jeff Shrager <shrager.PA>
  Subject: Q-registers in the brain


ENGRAM (en'-gram) n.  1.  The physical manifestation of human memory -- "the
engram."  2.  A particular memory in physical form.  [Usage note:  this term is
no longer in common use.  Prior to Wilson & Magruder's historic discovery,
the nature of the engram was a topic of intense speculation among
neuroscientists, psychologists, and even computer scientists.  In 1994
Professors M. R. Wilson and W. V. Magruder, both of Mount St. Coax University
in Palo Alto, proved conclusively that the mammalian brain is hardwired to
interpret a set of thirty seven genetically-transmitted cooperating TECO
macros.  Human memory was shown to reside in 1 million Q-registers as
Huffman-coded uppercase-only ASCII strings.  Interest in the engram has
declined substantially since that time.]

    --- from the New Century Unabridged English Dictionary,
        3rd edition, A.D. 2007.  David S. Touretzky (Ed.)

------------------------------

Date: 19 Jun 84 16:02:49-PDT (Tue)
From: ihnp4!houxm!mhuxl!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: pyuxn.769

>       (2) Intuition - by this I mean huge leaps into discovery
>       that have nothing to do with the application of logical
>       association or sensual observation. This kind of stuff
>       happens to all of us and cannot easily be explained by
>       the physical/mechanical model of the human mind.
>
>       I agree that if you could build a computer big enough and fast
>       enough and taught it all the "right stuff", you could duplicate
>       the human brain, but not the human mind.

Intuition is nothing more than one's subconscious employing logical thought
faster than the conscious brain can understand or realize it.  What's all the
fuss about?  And where's the difference between the "brain" and the "mind"?
What can this "mind" do that the physical brain doesn't?

A good dose of Hofstadterisms and Smullyanisms ("The Mind's 'I'" provides
good examples) puts to rest some of those notions of mind and brain.


"I take your opinions and multiply them by -1."
                                        Rich Rosen    pyuxn!rlr

------------------------------

Date: 19 Jun 84 13:55:43-PDT (Tue)
From: hplabs!hao!seismo!ut-sally!utastro!bill @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: utastro.114

>       (1) Subconscious memory - a person can be enabled (through
>       hypnosis or by asking him the right way) to remember
>       infinite details of any experience of this or prior life
>       times. Does the mind selectively block out trivia in order
>       to focus on what's important currently?

One of the reasons that evidence obtained under hypnosis is
inadmissible in many courts is that hypnotically induced
memories are notoriously unreliable, and can often be completely
false, even though they can seem extremely vivid.  In some states,
the mere fact that a witness has been under hypnosis is enough to
disqualify the individual's testimony in the case.

I have personal, tragic experience with this phenomenon in my own
family.  I don't intend to burden the net with this, but if anyone
doubts what I say, I will be glad to discuss it by E-mail.


        Bill Jefferys  8-%
        Astronomy Dept, University of Texas, Austin TX 78712   (USnail)
        {allegra,ihnp4}!{ut-sally,noao}!utastro!bill    (uucp)
        utastro!bill@ut-ngp                        (ARPANET)

------------------------------

Date: 20 Jun 84 9:22:50-PDT (Wed)
From: hplabs!hao!seismo!ut-sally!riddle @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: ut-sally.2301

Now that Chuqui's obligingly created net.sci, why don't we move this
discussion there?  Is there any reason for it to go on in five
newsgroups simultaneously?  If interest continues, perhaps this topic
will form the basis for net.sci.psych.

Followups to net.sci, please.

  --- Prentiss Riddle ("Aprendiz de todo, maestro de nada.")
  --- {ihnp4,harvard,seismo,gatech,ctvax}!ut-sally!riddle

------------------------------

Date: Thu, 21 Jun 84 15:47 CST
From: Nichael Cramer <cramer%ti-csl.csnet@csnet-relay.arpa>
Subject: Memory

>
>From: hplabs!hpda!fortune!crane @ Ucb-Vax.arpa
>
>        (1) Subconscious memory - a person can be [...]

But, brain is mind is brain is mind is brain is mind is brain...
[what else have you got to work with?]

                                        So long and thanks for all the fish,
                                        NLC

------------------------------

Date: 22 Jun 1984 1825-PDT (Friday)
From: gd@sri-spam (Greg DesBrisay)
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: aplvax.663


>The other thing to note is that while each 'memory cell' in a computer
>has ~2 connections, each 'memory cell' in the brain has ~100.  Since
>processing power is relative to (cells * connections), a measure of
>relative capacities is not sufficient for comparison between the brain
>and the CRAY.          -Lloyd W. Taylor


In addition, many connections in the human brain are analog in
character, so any comparison with a binary digital computer must
multiply the number of connections by the number of bits necessary to
digitize the analog range of each synapse.  To do that, one would have
to know what analog resolution is required to accurately model the
behavior of a synapse.  I'm not sure if anyone has figured that one out
yet.
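The cells-times-connections measure extends naturally to such a comparison;
a back-of-envelope sketch in which every figure (the neuron count, the 8-bit
synapse resolution) is an assumption rather than a measurement:

```python
# Rough capacity estimate: cells * connections * bits per connection.
neurons = 10 ** 11          # often-quoted order of magnitude (assumption)
connections = 100           # per 'memory cell', per the posting quoted above
bits_per_synapse = 8        # hypothetical analog resolution of one synapse

brain_bits = neurons * connections * bits_per_synapse
print(f"roughly 10^{len(str(brain_bits)) - 1} bits")  # -> roughly 10^13 bits
```

Changing the assumed resolution to 1 bit or 16 bits moves the estimate by
only a factor of a few, which is the point: the unknown analog resolution
dominates any comparison with a binary machine.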


Greg DesBrisay
SRI

------------------------------

Date: 20 Jun 84 9:20:43-PDT (Wed)
From: decvax!mcnc!unc!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: eosp1.954

I'm not comfortable with Rich Rosen's assertion that intuition
is just the mind's unconscious LOGICAL reasoning that happens
too fast for the conscious to track.  If intuition is simply
ordinary logical reasoning, we should be just as able to
simulate it as we can other types of reasoning.  In fact, attempts
to simulate intuition account for some rather noteworthy successes
and failures, and seem to require a number of discoveries before
we can make much real progress.  E.g.:

I think it is fair to claim that chess players use
intuition to evaluate chess positions.  We acknowledge that
computers have failed to be intuitive in playing chess in at
least two ways that are easy for people:
        - knowing what kinds of tactical shots to look for
          in a position
        - knowing how to plan long-term strategy in a position

In backgammon, Hans Berliner has a very successful program that
seems to have overcome the comparable backgammon problem.
His program has a way of deciding, in a smooth, continuous fashion,
when to shift from one set of assumptions to another while
analyzing.  I am not aware of whether other people have been able
to develop his techniques to other kinds of analysis, or whether
this is one flash of success.  Berliner has not been comparably
successful applying this idea to a chess program.
(The backgammon program defeated the then world champion in a short
match, in which the doubling cube was used.)

  [There was general agreement that the program's play was inferior,
  however.  Another point: while smooth transitioning between strategies
  is more "human" and easier to follow or explain (and thus to debug
  or improve), I can't see that it is inherently as powerful as
  switching to a new optimal strategy at each turn.  -- KIL]
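The "smooth, continuous" shifting credited to the backgammon program can be
illustrated by weighting two evaluation functions with a continuous phase
variable instead of switching at a threshold.  Everything below (the two
evaluators, the sigmoid, the field names) is an invented caricature, not
Berliner's program:

```python
import math

def attack_eval(position):
    return position["attack_score"]   # value under attacking assumptions

def race_eval(position):
    return position["race_score"]     # value under pure-race assumptions

def blended_eval(position):
    # phase rises smoothly from 0 (attack) to 1 (race) as the sides
    # disengage, so the evaluation never jumps between strategies
    phase = 1.0 / (1.0 + math.exp(-position["separation"]))
    return (1.0 - phase) * attack_eval(position) + phase * race_eval(position)

pos = {"attack_score": 10.0, "race_score": 2.0, "separation": 0.0}
print(blended_eval(pos))  # phase is exactly 0.5, so the average: 6.0
```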

Artists and composers use intuition as part of the process of
creating art.  It is likely that one of the benefits they gain
from intuition is that a good work of art has many more internal
relationships among its parts than the creator could have planned.
It is hard to see how this result can be derived from "logical"
reasoning of any ordinary deductive or inductive kind.  It is
easier to see how artists obtain this result by making various
kinds of intuitive decisions to limit their scope of free choice
in the creative process.

Computer-generated art has come closest to emulating this process
by using f-numbers rather than random numbers to generate
artistic decisions.  It is unlikely that the artist's intuition
is working as "simply" as deriving decision from f-numbers.
It remains a likely possibility that a type of reasoning that we
know little about is involved.

We are still pretty bad at programming pattern recognition, which
intuitive thinking does spectacularly well.  If one wishes to assert
that the pattern recognition is done by well-known logical processes,
I would like to see some substantiation.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 20 Jun 84 18:14:17-PDT (Wed)
From: decvax!linus!utzoo!henry @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: utzoo.3971

John Crane cites, as evidence for the human mind being impossible to
duplicate by computer, two phenomena.

        (1) Subconscious memory - a person can be enabled (through
        hypnosis or by asking him the right way) to remember
        infinite details of any experience of this or prior life
        times. Does the mind selectively block out trivia in order
        to focus on what's important currently?

As far as I know, there's no evidence of this that will stand up to
critical examination.  Even disregarding the "prior life times" part,
for which the reliable evidence is, roughly speaking, nonexistent,
the accuracy of recall under hypnosis is very doubtful.  True, the
subject can describe things in great detail, but it's not at all proven
that this detail represents *memory*, as opposed to imagination.  In
fact, although it's quite likely that hypnosis can help bring out things
that have been mostly forgotten, there is serious doubt that the memories
can be disentangled from the imagination well enough for, say, testimony
in court to be reliable when hypnosis is used.

        (2) Intuition - by this I mean huge leaps into discovery
        that have nothing to do with the application of logical
        association or sensual observation. This kind of stuff
        happens to all of us and cannot easily be explained by
        the physical/mechanical model of the human mind.

The trouble here is that "...have nothing to do with the application
of logical association or sensual observation..." is an assumption,
not a verified fact.  There is (weak) evidence suggesting that intuition
may be nothing more remarkable than reasoning and observation on a
subconscious level.  The human mind actually seems to be much more of
a pattern-matching engine than a reasoning engine, and it's not really
surprising if pattern-matching proceeds in a haphazard way that can
sometimes produce unexpected leaps.

                                Henry Spencer @ U of Toronto Zoology
                                {allegra,ihnp4,linus,decvax}!utzoo!henry

------------------------------

Date: 20 Jun 84 17:14:58-PDT (Wed)
From: ucbcad!tektronix!orca!shark!hutch @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: shark.838

| Intuition is nothing more than one's subconscious employing logical
| thought faster than the conscious brain can understand or realize it.
| What's all the fuss about?  And where's the difference between the
| "brain" and the "mind"? What can this "mind" do that the physical brain
| doesn't?
|                                       Rich Rosen    pyuxn!rlr

Thank you, Rich, for so succinctly laying to rest all the questions
mankind has ever had about self and mind and consciousness.

Now, how about proving it.  Oh, and by the way, what is a "subconscious"
and how do you differentiate between a "conscious" brain and a "subconscious"
in any meaningful way?

And once you have told us exactly what a physical brain can do, then we
can tell you what a mind could do that it doesn't.

Hutch

------------------------------

Date: 21 June 1984 0802-EDT
From: Lydia Defilippo at CMU-CS-A
Subject: Seminar - Full Abstraction and Semantic Equivalence

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Speaker:  Ketan Mulmuley
Date:     Friday, June 22
Time:     11:00
Place:    5409
Title:    Full Abstraction and Semantic Equivalence

The Denotational Approach of Scott-Strachey in giving semantics
to programming languages is well known. In this approach each
construct of the programming language is given a meaning in a
domain which has nice mathematical properties.
Semantic equivalence is the problem of showing that this map --
the denotational semantics -- is faithful to the operational semantics.
Because the known methods for showing such equivalences were too complicated,
very few such proofs have been carried out.
Many authors had expressed a need for mechanization of these proofs.
But it remained unclear whether such proofs could be mechanized at all.
In this thesis we give a general theory for proving such equivalences
which has the distinct advantage of being mechanizable.  A mechanized
tool was actually built on top of LCF to aid proofs of semantic
equivalence.

The other central problem of denotational semantics is that of
full abstraction, i.e., determining whether the meanings given to two
different language constructs by the denotational semantics are equal
whenever the constructs are operationally equivalent.  This has been
known to be a hard problem, and the only known general method of
constructing such models was the syntactic method of Milner.  Whether
such models could be constructed semantically remained an important
open problem.  In this thesis we show that this is indeed the case.

------------------------------

End of AIList Digest
********************

∂25-Jun-84  0021	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #79
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Jun 84  00:21:32 PDT
Date: Sun 24 Jun 1984 22:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #79
To: AIList@SRI-AI


AIList Digest            Monday, 25 Jun 1984       Volume 2 : Issue 79

Today's Topics:
  Combinatory Logic - Request,
  AI Tools - NIAL,
  AI and Society - Relevance of "souls" to AI,
  Problem Solving - Commonsense Reasoning,
  AI Programming - Spelling Correction,
  Cognition - Intuition & Mind vs. Brain
----------------------------------------------------------------------

Date: 28 Jun 84 6:56:08-EDT (Thu)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!ukc!srlm @ Ucb-Vax.arpa
Subject: combinatory logic
Article-I.D.: ukc.4280

[-: kipple :-]            [I couldn't bear to delete this one. -- KIL]


In the hope that many of you are also interested in combinatory logic...
please have a look at this and mail me any suggestions, references, etc.

                          ------------------

[by a. pettorossi in notre dame j. form. logic 22 (4) 81]

define:
     marking

        is a function that assigns, to each combinator in a term (tree),
        the number of left choices (of path) that one has to make to go
        from the root to the combinator.
        ex.:
        marking SII = <S,2><I,1><I,0>

     the set of right applied subterms of a combinator X is defined as:

        1) if X is a basic combinator or a variable, ras(X) = {X}
        2) if X is (YZ), then ras(YZ) = union (ras(Y)) {Z}

     a combinator X with reduction axiom X x1 x2 x3 ... xk -> Y
     has non-ascending property iff

        for all i, 1<=i<=k, if <xi,p> occurs in marking (X x1...xk)
        and <xi,q> occurs in marking Y, then p >= q.

     a combinator (X x1 x2 ... xk -> Y) has compositive effect iff

        a right applied subterm of Y is not a variable.
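
A small sketch in Python (my own gloss, not from Pettorossi's paper) of
the two definitions above, representing a term as nested 2-tuples for
application with strings at the leaves, and reading the second ras
clause as ras(YZ) = ras(Y) union {Z}:

```python
def marking(term, lefts=0):
    """List, for each combinator/variable leaf, the number of left
    choices on the path from the root to that leaf."""
    if isinstance(term, str):                  # basic combinator or variable
        return [(term, lefts)]
    left, right = term                         # an application (Y Z)
    return marking(left, lefts + 1) + marking(right, lefts)

def ras(term):
    """Right applied subterms: ras(x) = {x}; ras(YZ) = ras(Y) | {Z}."""
    if isinstance(term, str):
        return {term}
    left, right = term
    return ras(left) | {right}

# S I I parses left-associatively as ((S I) I)
sii = (("S", "I"), "I")
print(marking(sii))   # [('S', 2), ('I', 1), ('I', 0)], as in the example
```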

                          ------------------
Theorem:
        given a subbase B={X1,...Xk} such that all Xi in B have non-ascending
        property and no compositive effect, every reduction strategy applied
        to any Y in B+ leads to normal form.

                          ------------------
Open Problem:
        does the theorem hold if non-ascending property is the only condition?

                          ------------------
My personal questions:

        if one specifies leftmost-outermost reduction only, would the Open
        Problem be any easier?

        how much of combinatory logic can we do with B?

        and with non-ascending property only?


        silvio lemos meira

        UUCP:   ...!{vax135,mcvax}!ukc!srlm
        Post:
                computing laboratory
                university of kent at canterbury
                canterbury ct2 7nf uk
        Phone:
                +44 227 66822 extension 568

------------------------------

Date: 20 Jun 84 10:35:51-PDT (Wed)
From: decvax!linus!utzoo!utcsrgv!qucis!carl @ Ucb-Vax.arpa
Subject: what is NIAL?
Article-I.D.: qucis.70

Nial is the "Nested Interactive Array Language."
It is based on the nested, rectangular arrays of T. More, and
  has aspects of Lisp, APL, FP, and Pascal.
Nial runs on lots of Unix(&etc) systems, VAX/VMS, PC-DOS, and
  VM/CMS (almost).
Nial is being used primarily for prototyping and logic programming.
Distribution is through Nial Systems Limited, PO Box 2128, Kingston,
  Ontario, Canada, K7L 5J8. (613) 549-1432.
Here are some trivial samples (names in uppercase are second order
  functions, called transformers):

  5 in 0 1 2 5  =  truth
  1 3 5 EACHLEFT in 0 1 2 5  =  truth falsehood truth

  average is divide [sum, tally]
  average 1 2 3 4 5  =  3.
    [sum, tally] 1 2 3 4 5  =  15 5
    divide 15 5  =  3.

  MONO is equal EACH
  MONO type  1  2.0  3.1j4.3  `a  "phrase  ?fault  truth  =  falsehood
  MONO type 1 3 5 2  =  truth

------------------------------

Date: 10 Jun 84 11:39:00-PDT (Sun)
From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax.arpa
Subject: Re: Re: Relevance of "souls" to AI
Article-I.D.: hpfclq.68500003

Is a soul going to be the real issue here?

> I submit that the concept of "soul" is irrelevant only if AI is doomed
> to utter failure.  Use your imagination and consider a computer program
> that exhibits many of the characteristics of a human being in
> its ability to reason, to converse, and to be creative and unexpected in
> its actions.  How will you AI-ers defend yourself if a distinguished
> theologian asserts that G-d has granted to your computer program a soul?

To those AIers who don't believe in God, it probably won't matter much what a
distinguished theologian asserts.  I think many who believe in God will
wonder why God would come down and bless a computer program with a soul.
They will doubt the theologian.  And for those that do believe that
the program has a soul, what are they to defend themselves from?  Are they
to defend God for doing it?  Or they may just agree with the theologian
saying, "Yep, that sure is neat that it has a soul."

I think a bigger problem will be empathy for the program.  A program that
is your friend could be just as hard to kill as any other being.
This could be particularly true of people who are only end users of
these friend programs and don't understand how they work.  It is hard
to guess the psychological effects of man-machine friendships.  It is a very
lonely world, and a computer might be your only friend in it!

> If he might be right, the program, and its hardware must not be destroyed.

Is cremation bad because that destroys the hardware of
something that had a soul?

> Perhaps it should not be altered either, lest its soul be lost.
> The casual destruction, recreation and development of computer programs
> containing souls will horrify many people.

Altering, such as in psychotherapy for humans and mods to code or inference
tables in programs, is bad?  Operating on people or making mods to hardware
is bad?   I would imagine not.  What we do have is the possibility
of modifying and experimenting with models of human psychologies to a
degree never before available.  What are the issues involved in the
torture of beings created out of software?  The indiscriminate
experimentation on man-made psyches may bring about a new form of the
antivivisectionist movement.  This is all independent of the soul issue
for many people.  "If it really appears to be human, how can you kill it?"
will be the underlying measure, I think.  Again, who knows how the
intervening history will condition man to the thought of man-made intelligence.

> You will face demonstrations,
> destruction of laboratories, and government interference of the worst kind.

Nice drama here.

> Start saving up now, for a defense fund for the first AI-er accused by
> a district attorney of soul-murder.

Now I speak from the point of view of someone who doesn't hold much stock in
the idea of a soul.  I do believe in the importance of the human as a
thinking, feeling being, so we may really agree.  A lot of what you said
seems to be all based on the issue of a soul.  I'm just not convinced that
that many people will see it as an issue of the soul.  I can see more easily
the DA above arguing that the man-made intelligence is alive
and therefore can be murdered.

> On second thought, you have nothing to fear;  no one in AI is really trying
> to make computers act like humans, right?

You bet AIers are out to make computers act like humans, bit by bit
and byte by byte.  They are also studying
even more general concepts.  What is intelligence?  What is
the nature of thought?  This goes beyond just making a machine act like
a human.
                                -Robert (animal) Heckendorn
                                hplabs!hpfcla!robert

[A couple of notes here:  First, SF writers have certainly tried to
explore the man/machine friendship issue in many forms.  I remember
stories about robots, computer environments (e.g., HAL), direct
computer/brain links, relationships with intelligent spaceships, etc.
Second, the churches have seldom been strongly opposed to killing
either in war or as capital punishment.  At times they have taken the
position that torture and death are unimportant as long as confession
has cleared the soul for entry to heaven.  They have been less tolerant
of the torture of soulless animals.  -- KIL]

------------------------------

Date: 21 Jun 84 13:58:15-PDT (Thu)
From: ihnp4!houxm!mhuxl!mhuxm!mhuxi!charm!slag @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: charm.377


        In solving a puzzle like:

If 3 is half of 5, what is a third of ten?

 One might try a series of solutions like the ones suggested,
but I would consider them incorrect if they were logically
inconsistent.  The meaning of the problem would be undermined
if one redefined three but not two, five, ten, half or third.

        One approach I would take would be to explore
alternate bases.  For instance, in base nine, three is a third
of ten.  This approach does not solve the above problem though
so it must be marked as wrong, and thrown out.
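
The base-nine observation is easy to verify mechanically (a throwaway
sketch, not part of the original post): single digits keep their face
value in any base, while the numeral "10" denotes the base itself.

```python
# "3 is a third of 10" holds exactly when 3 * 3 equals the base b,
# since '10' in base b denotes b.
third_bases = [b for b in range(4, 37) if 3 * 3 == b]

# "3 is half of 5" involves single digits only, whose values do not
# depend on the base, so no choice of base can make it true.
half_bases = [b for b in range(6, 37) if 2 * 3 == 5]

print(third_bases, half_bases)   # [9] [] -- which is why the approach fails
```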

        At what point should a problem like that be given up
on as illogical?

------------------------------

Date: 21 Jun 84 12:45:00-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!keller @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning? - (nf)
Article-I.D.: uicsl.12300001


1/2 * 5 = 2.5 round up to 3
1/3 * 10 = 3.333... round down to 3

Just another possible interpretation.

  -Shaun Keller

------------------------------

Date: Sun 24 Jun 84 22:34:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commonsense Reasoning?

    1/2 * 5 = 2.5 round up to 3
    1/3 * 10 = 3.333... round down to 3
                  -Shaun Keller


Shaun's solution is the same as Richard Treitel's solution in the
previous issue, derived independently.  I like it better than my
own solution except for the fact that it makes the problem less
metaphysical.

Roger Hale's solution of (temporarily) subtracting one from each
number was essentially a solution to "If 3-X were half of 5-X, what
would X plus a third of 10-X be?"  It seems as valid as my own
solution to "If 3 were half of 5X, what would a third of 10X be?"

I am surprised that such good alternatives to my explanation were found,
especially after I had exposed everyone to my own way of thinking.
For 18 years I've thought I had >>the<< answer.

                                        -- Ken Laws

------------------------------

Date: 21 Jun 84 17:15:09-PDT (Thu)
From: decvax!mcnc!unc!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax.arpa
Subject: Re: Commonsense reasoning
Article-I.D.: eosp1.955


>> Q: If you call a tail a leg, how many legs does a sheep have?

>> A: Four.  Calling a tail a leg doesn't make it a leg.

I find this answer less satisfactory than the two given below.
It seems to me that "calling an X a Y" is exactly how we define
what most things are.  SO:

A: One.  A tail is a leg; those other four things are obviously
something else.

OR:

A: Five. If you call it a leg, it is a leg (albeit of a different
kind), in addition to those other four legs.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: Sun 24 Jun 84 22:30:21-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Spelling Correction vs. Fact Correction

If one changed the content of a Spelling corrector to be a list of
predicates containing `facts' rather than sequences of letters, and then
one used such a program against the output of a parser which reduced
incoming text to similarly structured predicates, and the `fact checker'
then emitted confirmations or `corrections' of the facts in the parsed text
(e.g. South-Of San-Francisco San-Jose; Capital-of USSR Moscow; etc.)
would this be a knowledge-based system? What has changed from sequences
of letters being acceptable `truths' to the mechanical use of predicates?

I fail to see how this is very different from having a spelling corrector
look over a string of letters and note that MAN and DOG are correct truths
whereas DOA (= Capital-of USSR San-Francisco) and MNA (= South-Of
San-Jose San-Francisco) are actually `misspellings' of DOG and MAN.

It might well be that one doesn't want to call a system that uses this
strategy to proofcheck students' essays about geography an AI program,
but it sure would be hard to tell from its performance whether it
was an AI program or a non-AI program `pretending' to be an AI program.
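
Amsler's analogy is easy to mock up (a hypothetical sketch; the
predicate names are just those from the post): store facts as tuples
and "correct" an unknown triple to the nearest known fact, exactly as a
spelling corrector maps MNA to MAN.

```python
FACTS = {
    ("South-Of", "San-Francisco", "San-Jose"),
    ("Capital-of", "USSR", "Moscow"),
}

def distance(a, b):
    """Count the fields on which two predicate triples disagree."""
    return sum(x != y for x, y in zip(a, b))

def check(triple):
    """Confirm a known fact, or 'correct' it to the closest one."""
    if triple in FACTS:
        return ("confirmed", triple)
    return ("corrected", min(FACTS, key=lambda f: distance(f, triple)))

print(check(("Capital-of", "USSR", "San-Francisco")))
# -> ('corrected', ('Capital-of', 'USSR', 'Moscow'))
```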

------------------------------

Date: 21 Jun 84 13:30:56-PDT (Thu)
From: hplabs!hao!seismo!cmcl2!floyd!whuxle!spuxll!ech @ Ucb-Vax.arpa
Subject: Re: Intuition
Article-I.D.: spuxll.510

We have a couple of different issues here: is there a distinction between
'mind' and 'brain', and -- if you advocate the position that there is no
difference -- what possible mechanisms account for intuition?

On the first, I will (like others) recommend "The Mind's I".  The issue
is addressed until ANYBODY will get confused.  You may come away with the
same belief, but you will have DOUBTS, regardless of your current position.

As for "intuition," we are (so far) using an inaccurate picture: those
"leaps of imagination" are not necessarily correct insights!  Have you never
had an intuitive feeling that was WRONG in the face of additional data?

Let's look at a few candidates; are any of these either supported or
disproved by current evidence?

1. Intuition is just deduction based on data one is not CONSCIOUSLY aware of.
   Body language is a good example of data we all collect but often are not
   aware of consciously; we may use terms like "good/bad vibes"...

2. Intuition is just induction based on partial data and application of a
   "model" or "pattern" from a different experience.

3. Intuition is a random-number-generator along with some "sanity checks"
   against internal consistency and/or available data.

I submit that about the only thing we KNOW about intuition is that it is
not a consciously rational process.  Introspection, by definition, will not
yield up any distinctions between any of the above three mechanisms, or
between them and the effects of a soul or divine inspiration.  The traditional
technical and ethical constraints against breaking open that skull to measure
it are only beginning to break down (the technical ones, that is!).

I'll add one thing, then get off the box.  I USE my intuition: I am willing
to take ideas whether I can account for the source/process or not.  However,
I apply the usual rational processes to the intuitive notion before swearing to
its truth: check for self-consistency, consistency with available data,
and where possible set up "experiments" that might falsify the premise.
The Son of Sam had the divine inspiration that he had to kill a few folks...

=Ned=

------------------------------

Date: Sun, 24 Jun 84 13:17:28 PDT
From: Michael Dyer <dyer@UCLA-CS.ARPA>
Subject: Intuition

Those who are trying to argue that "intuition" is something that cannot
be mechanized or understood in terms of computational structures and
operations should try substituting the word "soul" everywhere for
"intuition"  and see if they still believe their own arguments.
If they still do, then I ask them to re-read Minsky's comments
on the "soul" a few digest issues back.  The task of AI researchers
is to show how such vague notions CAN be understood computationally,
not to go around arguing against this simply because such notions
as "intuition" are so vague as to be computationally useless at
such a bs level of discussion.  It's like my postulating the
notion of "radio" and then looking at each transistor, crystal, wire or
what-have-you inside the radio, and then saying "THAT part can't be a
radio; that OTHER part there can't be one either.  I guess the idea of
'radio' can never be realized by the combination of such parts."
I second the suggestion that amateur philosophers of mind read
Hofstadter, or better yet, start building computer programs which
exhibit aspects of "intuition" and then discuss their own programs.

------------------------------

Date: 22 Jun 84 8:41:28-PDT (Fri)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: ccivax.171

In reference to Mr. Robison's comments:

Is it possible that "intuition" is the word we
use to explain what cannot be explained more
formally or logically?

I'm thinking of the explanation of evolution
based on Natural Selection.  An explanation based
on probability is NOT an explanation at all.
It is an admission that there is no logical or
formal explanation possible.  Of course, we
still accept evolution as a fact of life, but
we don't have any mechanical (or dynamical in the
sense of physics) model for it.

Perhaps the same is true of our experience of
intuition.  Something is going on when we have
a flash of insight, but we don't have any
dynamical model that can be used for prediction.

I think that Mr. Robison is correct when he says
that we just don't know much about how our
mind/brain system works.  We need to keep asking
any and all questions that come to mind (pun not
intended) -- that's what science is all about.

        Bill Anderson

        ...!{ {ucbvax | decvax}!allegra!rlgvax }!ccivax!band

------------------------------

Date: 22 Jun 84 10:11:16-PDT (Fri)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: pyuxn.770

[from shark!hutch]
> | Intuition is nothing more than one's subconscious employing logical
> | thought faster than the conscious brain can understand or realize it.
> | What's all the fuss about?  And where's the difference between the
> | "brain" and the "mind"? What can this "mind" do that the physical brain
> | doesn't?
> |                                     Rich Rosen    pyuxn!rlr
>
> Thank you, Rich, for so succinctly laying to rest all the questions
> mankind has ever had about self and mind and consciousness.

You're welcome.  It only takes a minuscule amount of logic and a careful
shave with my Occam's Electric Razor.  The point is, for all this talk of
"soul" and "mind", I've never seen anything that points to a *need* (from a
logical point of view) for anything external to "physicalism" to describe
the goings-on in the human brain.

> Now, how about proving it.  Oh, and by the way, what is a "subconscious"
> and how do you differentiate between a "conscious" brain and a "subconscious"
> in any meaningful way?
> And once you have told us exactly what a physical brain can do, then we
> can tell you what a mind could do that it doesn't.

Let's place the burden of proof on the proper set of shoulders.  If anyone is
proposing a view of intelligence involving a "mind" (defined as that part of
intellect not part of the physical brain), then they had better describe some
phenomena which physical processes cannot account for.

[from eosp1!robison]
> I'm not comfortable with Rich Rosen's assertion that intuition
> is just the mind's unconscious LOGICAL reasoning that happens
> too fast for the conscious to track.  If intuition is simply
> ordinary logical reasoning, we should be just as able to
> simulate it as we can other types of reasoning.  In fact, attempts
> to simulate intuition account for some rather noteworthy successes
> and failures, and seem to require a number of discoveries before
> we can make much real progress.  E.g.:

My statement was probably a little too concise there.  It seems like the
brain may be able to extract patterns through an elaborate pattern matching
process that can be triggered by random (or pseudo-random) "browsing", such
that a small subsection of a matched thought pattern can trigger the recall
(or synthesis) of an entire thought element.  (Whatever that means...)

> Artists and composers use intuition as part of the process of
> creating art.  It is likely that one of the benefits they gain
> from intuition is that a good work of art has many more internal
> relationships among its parts than the creator could have planned.
> It is hard to see how this result can be derived from "logical"
> reasoning of any ordinary deductive or inductive kind.  It is
> easier to see how artists obtain this result by making various
> kinds of intuitive decisions to limit their scope of free choice
> in the creative process.

Logical may not be the right word, since the process seems to be neither
conscious nor intentional.  The "click" or "flash" that often is said to
coincide with intuitive realizations seems like an interrupt from a sub-
conscious process that, after random (or pseudo-random) searching, has found
a "match".

"Submitted for your approval..."                  Rich Rosen    pyuxn!rlr

------------------------------

End of AIList Digest
********************

∂26-Jun-84  0054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #80
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Jun 84  00:53:57 PDT
Date: Mon 25 Jun 1984 22:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #80
To: AIList@SRI-AI


AIList Digest            Tuesday, 26 Jun 1984      Volume 2 : Issue 80

Today's Topics:
  Expert Systems - Request for Abstracts,
  Reasoning - Checking for Inconsistencies,
  AI Programming & Turing Tests - Spelling Correction,
  Business - Softwar,
  Cognition - Intuition & Hypnosis & Unconscious Mind,
  Games - Optimal Strategies,
  Philosophy - Purpose & Relation to AI
----------------------------------------------------------------------

Date: 25 Jun 84 17:05:31 EDT  (Mon)
From: Dana S. Nau <dsn@umcp-cs.arpa>
Subject: expert computer systems

I am currently writing a revised and updated version of my tutorial on
expert computer systems (which appeared in IEEE Computer in Feb.  1983).  As
part of the tutorial I plan to include a list of current expert computer
systems, including both their domains of expertise and references to any
available current papers describing them.  If you know of any successful
expert computer systems which you would like me to mention, please send me a
brief note giving the name of the system, the domain area, what kind of
success the system has had, and journal-style reference listings for any
relevant published papers.

------------------------------

Date: 23 Jun 84 16:24:59-PDT (Sat)
From: hplabs!tektronix!orca!shark!brianp @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: shark.845

When presented with a problem like the 'if 3 is half of 5' one,
how many dive right in and try to solve something, and how many
start by checking for inconsistencies?  Solving problems that
are 'inconsistent' sounds like it goes in the same pile as working
with insufficient data.  (problems with problems :-)

                                Brian Peterson
                                ...ucbvax!tektronix!shark!brianp

------------------------------

Date: Mon, 25 Jun 84 09:09 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Spelling Correction vs. Fact Correction

"It might well be one doesn't want to call a system that uses this
strategy to proofcheck student's essays about geography an AI program,
but it sure would be hard to tell from its performance whether it
was an AI program or a non-AI program `pretending' to be an AI program."

        -- Robert Amsler <AMSLER@SRI-AI>

If one cannot distinguish a non-artificial intelligence program from an
artificial intelligence program by, say, interacting with it freely for
a couple of hours, then would not one be compelled to conclude that the
non-artificial intelligence program was displaying true artificial
artificial intelligence?

Mark

------------------------------

Date: Sun, 24 Jun 84 18:38:06 pdt
From: syming%B.CC@Berkeley
Subject: Re: Softwar

Four years ago, I worked as a programmer for the Business School at Ohio State U.
When we ordered SAS/ETS (Statistical Analysis System/Econometric and Time Series)
from SAS company, they sent us a tape with a fixed time (two months or so?)
payment notice and stated that the program would vanish after that time. Of
course, we paid in time and they sent us a 20(?)-digit long key word and
instruction to make our trial copy a one-year-life-time program, since the
service contract was year by year. I had not realized this was a rare case.
Isn't it a common practice for a company to protect its products?

  -- syming hwang

------------------------------

Date: 23 Jun 84 8:13:06-PDT (Sat)
From: hplabs!hao!seismo!ut-sally!utastro!bill @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: utastro.127

Apropos this discussion, there has been research into hypnotically
aided recall that casts serious doubt on its reliability.
Two recent articles in *Science* magazine directly address this issue:
"The Use of Hypnosis to Enhance Recall", Oct 14, 1983, pp. 184-185 and
"Hypnotically Created Memory Among Highly Hypnotized Subjects", Nov 4,
1983, pp. 523-524.

        Bill Jefferys  8-%
        Astronomy Dept, University of Texas, Austin TX 78712   (USnail)
        {allegra,ihnp4}!{ut-sally,noao}!utastro!bill    (uucp)
        utastro!bill@ut-ngp                        (ARPANET)

------------------------------

Date: 22 Jun 84 10:56:44-PDT (Fri)
From: ihnp4!houxm!mhuxl!mhuxm!mhuxi!charm!slag @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: charm.380

        There seems to be some consensus here that, however mind
and brain are related, more processing is going
on than we are directly aware of.  In some way, a filtering
mechanism in our mind/brain extracts certain salient images
from all the associations and connections.  It is these
structures (thoughts?) that I would call consciousness or
awareness.  Would anybody care to take a stab at a
model for this?


Logic is a bunch of pretty flowers that smell bad.

                                slag heap.

------------------------------

Date: 22 Jun 84 13:58:21-PDT (Fri)
From: ihnp4!houxm!mhuxl!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa
Subject: Re: Intuition
Article-I.D.: pyuxn.773

[from Ned Horvath:]
> I will (like others) recommend "The Mind's I".  The issue
> is addressed until ANYBODY will get confused.  You may come away with the
> same belief, but you will have DOUBTS, regardless of your current position.
> As for "intuition," we are (so far) using an inaccurate picture: those
> "leaps of imagination" are not necessarily correct insights!  Have you never
> had an intuitive feeling that was WRONG in the face of additional data?

> 1. Intuition is just deduction based on data one is not CONSCIOUSLY aware of.
>    Body language is a good example of data we all collect but often are not
>    aware of consciously; we may use terms like "good/bad vibes"...
> 2. Intuition is just induction based on partial data and application of a
>    "model" or "pattern" from a different experience.
> 3. Intuition is a random-number-generator along with some "sanity checks"
>    against internal consistency and/or available data.

> I submit that about the only thing we KNOW about intuition is that it is
> not a consciously rational process.  Introspection, by definition, will not
> yield up any distinctions between any of the above three mechanisms, or
> between them and the effects of a soul or divine inspiration.

Thanks, Ned, for putting together what I was trying to say about intuition
in a clearer manner than I could.  The three examples you cite sound like
rationally feasible constructs to describe what we call intuition.  As far
as external possibilities (souls and deities), it seems sufficient to say that
until we see a facet which internal biochemical physical processes cannot
account for, there is no reason to presuppose the supernatural/external.

  "So, it was all a dream!" --Mr. Pither
  "No, dear, this is the dream; you're still in the cell." --his mother
                                Rich Rosen    pyuxn!rlr

------------------------------

Date: 22 Jun 84 8:36:42-PDT (Fri)
From: ihnp4!cbosgd!rbg @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: cbosgd.42

The distinction between conscious and subconscious components of the mind
is an important one.  The substrate for consciousness is basically cortical,
which implies that it has access to language and reasoning processes, but
only some of the information about emotional states processed
primarily in lower brain centers.  To restate it: consciousness can monitor
only a fraction of the activity of the brain, and can effectively control
only a fraction of our behavior.  The example of body language not being
conscious is a good one (although trained observers can learn to make
conscious interpretations of some of these signals).

>2. Intuition is just induction based on partial data and application of a
>   "model" or "pattern" from a different experience.
>
>3. Intuition is a random-number-generator along with some "sanity checks"
>   against internal consistency and/or available data.
>
>I submit that about the only thing we KNOW about intuition is that it is
>not a consciously rational process.
> ech@spuxll.UUCP (Ned Horvath)

There is a variety of evidence that human memory is content addressable.
The results of the association process whereby different memories are
compared or brought together are accessible to consciousness, and indeed
may even make up a significant component of the "stream of consciousness".
The "sanity checks" are the conscious, rational evaluation of the
associations.  A lot of intuitions and ideas get junked...

The control of this association process is not rational: how many times
have you known that you knew a fact, but were unable to produce it on the
spot?  There may well be an element of randomness to this process (Hinton
at CMU has suggested a model based on statistical mechanics), but there
are also constraints on the patterns to be matched against.  You don't
generate lots of inappropriate associations, or you would not be very
successful in competing for survival.  And that is the force that shaped
our brain and thought capacity.
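[Hinton's statistical-mechanics suggestion grew into what is now called the
Boltzmann machine; the flavor of content-addressable recall can be sketched
with a deterministic Hopfield-style net.  A toy illustration only, not
Hinton's model, which adds stochastic units and a temperature schedule:]

```python
# Toy Hopfield-style content-addressable memory: +/-1 patterns are
# stored in a weight matrix by the Hebb rule, and a corrupted cue is
# swept until it settles into the nearest stored pattern.
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, sweeps=10):
    s = list(cue)
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            v = 1 if h >= 0 else -1
            if v != s[i]:
                s[i], changed = v, True
        if not changed:          # fixed point reached: the recalled memory
            break
    return s
```

[Flipping one bit of a stored pattern and calling recall gets the original
back -- the "association" happens with no rational control at all.]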

  --Rich Goldschmidt    cbosgd!rbg     a former brain hacker (now reformed?)

------------------------------

Date: 25 Jun 1984 10:39-EST
From: Robert.Frederking@CMU-CS-CAD.ARPA
Subject: Intuition; Hans Berliner

        There is a good article in the Winter 83 AI Magazine (4:4) about
non-logical AI (it is a rebuttal to Nils Nilsson's Presidential Address
at AAAI-83).  The authors point out that certain problems are intractable if
dealt with symbolically, whereas they are easily solved if one uses
real numbers and ordinary math.  I suspect that the human brain uses a
combination of analog and digital/symbolic processing, and that some
cases of intuition might arise from the results of an analog
computation into which introspection is not possible.
        As for Ken Laws's comment about switching to a new optimal
strategy at each step (rather than Berliner's smoothing of
transitions), one of the things he is trying to get around is the
"horizon effect", where the existence of a sharp cut-off in the
program's evaluation makes it think that postponing a problem solves it
(since you no longer see the problem if it is pushed back over your
horizon).  In other words, perhaps the optimal strategy at each point *is*
a non-linear combination of several discrete strategies.
        Also, I think it is a mistake to say that "pattern-matching"
and "reasoning" are different things.  After all, one must
pattern-match in order to find appropriate objects to combine with an
inference rule (obvious in OPS5, but also true in PROLOG).  The
question at hand is perhaps more whether one is allowed to use logically
unsound inferences (a.k.a. heuristics).
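[The pattern-matching step in rule application can be shown in miniature:
matching a rule's pattern, with ?-variables, against a ground fact.  A toy
one-way matcher for illustration -- far simpler than OPS5's rete network or
PROLOG's full two-way unification:]

```python
# Match a pattern such as ("age", "?x", "?n") against a ground fact,
# returning the variable bindings, or None on failure.
def match(pattern, fact, bindings=None):
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:              # variable already bound
            return bindings if bindings[pattern] == fact else None
        bindings[pattern] = fact             # bind it now
        return bindings
    if isinstance(pattern, tuple) and isinstance(fact, tuple) \
            and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            bindings = match(p, f, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == fact else None
```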

------------------------------

Date: Mon 25 Jun 84 08:10:45-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: ``Mind and brain'' mumbo-jumbo

> From: Michael Dyer <dyer@UCLA-CS.ARPA>
> The task of AI researchers
> is to show how such vague notions CAN be understood computationally,
> not to go around arguing against this simply because such notions
> as "intuition" are so vague as to be computationally useless at
> such a bs level of discussion.  It's like my postulating the
> notion of "radio" and then looking at each transistor, crystal, wire or
> what-have-you inside the radio, and then saying "THAT part can't be a
> radio; that OTHER part there can't be one either."

Just so!

> From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax.arpa
> Is it possible that "intuition" is the word we
> use to explain what cannot be explained more
> formally or logically?

Why do these discussions always degenerate into suggestions of
absolute limits to reason, perception or what not? That the task is
*very* difficult we know, but we should not claim (without proof) that
something *cannot* be done just because we cannot see how it could be
done (within our lifetime...). Reminds me of those old ``if God had
intended man to fly...'' arguments...  Let's replace those ``what
*cannot* be explained'' by ``what we can't yet explain''!

  -- Fernando Pereira
  pereira@sri-ai

------------------------------

Date: 25 Jun 84 16:27:57 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Philosophy and other amusements.

Judging from the responses on this net, the audience is evenly split between
those who consider philosophy a waste of time in the context of AI, and those
who love to dig up and discuss the same old chestnuts and conundrums that
have amused amateur philosophers for many years now.

First, any AI program worthy of that appellation is in fact an implementation
of a philosophical theory, whether the implementer is aware of that fact or
not. It is  unfortunate  that most implementers do *NOT* seem to be
aware of this.

Take something as apparently clear and unphilosophical as a vision program
trying to make sense out of a blocks-world. Well, all that code deciding
whether this or that junction of line segments could correspond to a corner
is ultimately based on the (usually subconscious) presumption that there
is a "real" world, that it exhibits certain regularities whether perceived
by man or machine, that these regularities correspond to arrangements of
"matter" and "energy", and that some aspects of these regularities can and
should serve to constrain the behavior of some machine. There are  even
more buried assumptions about the time invariance of physical phenomena,
the principle of causation, and the essential equivalence of "intelligent"
behavior realized by different kinds of hardware/mushware (i.e. cells vs.
transistors). ALL of these assumptions represent philosophical positions,
which at other times, and in other places would have been severely
questioned. It is only our common western heritage of rationalism and
materialism that cloaks these facts, and makes it appear that the matter is
settled. The unfortunate end-effect of this is that some of our more able
practitioners (hackers) are unable to critically examine the foundations
on which they build their systems, leading to ever more complex hacks, with
patches applied where the underlying fabric of thought becomes threadbare.

Second, for those who are fond of unscrewing the inscrutable, it should be
pointed out that philosophy has never answered any fundamental questions
(i.e. identity, duality, one vs. many, existence, essence etc. etc.).
That is not its purpose; instead it should be an attempt to critically
examine the foundations of our opinions and beliefs about the world, and
its meaning. Take a real hard look at why you believe that "...Intuition
is nothing more than..." thus-and-such, and if you come up with 'it is
intuitively obvious', or 'everybody knows that', you've uncovered a mental
blind spot. You may in the end confirm your original views, but at least
you will know why you believe what you do, and you will have become aware
of alternative views.

Consider a solipsist AI program: philosophically unassailable, logically
self-consistent, but functionally useless and indistinguishable from
an autistic program. I'm afraid that some of the AI program approaches
are just as dead-end, because they reflect only too well the simplistic
views of their authors.

        Pete    BIESEL@RUTGERS.ARPA


(quick, more gasoline, I think the flames are dying down...)

------------------------------

End of AIList Digest
********************

∂28-Jun-84  1319	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #81
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Jun 84  13:19:03 PDT
Date: Thu 28 Jun 1984 11:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #81
To: AIList@SRI-AI


AIList Digest           Thursday, 28 Jun 1984      Volume 2 : Issue 81

Today's Topics:
  AAAI - Instructions,
  Standards - Maintaining High Quality in AI Products,
  Business - Softwar,
  Mathematics - Best fitting curve,
  Knowledge Representation - Frames Question,
  AI and Statistics - Bibliography,
  AI Programming - Spelling Correctors,
  Turing Test - Machines vs People
----------------------------------------------------------------------

Date: 26 June 1984 1651-EDT
From: Dave Touretzky at CMU-CS-A
Subject: AAAI paper presentations

Claudia Mazzetti, executive director of AAAI, warns that many of the
paper presentations at this year's conference will be in very large
concert or lecture halls.  Ordinary transparencies done in 20-point
font will *not* be readable.  AAAI very strongly recommends using
35mm slides for paper presentations.  If you must use transparencies,
a 36-point font is recommended.

------------------------------

Date: 27 Jun 84 16:02:56-PDT (Wed)
From: hplabs!hao!seismo!brl-tgr!abc @ Ucb-Vax.arpa
Subject: Re: Maintaining High Quality in AI Products
Article-I.D.: brl-tgr.3065

I suggest that the ACM provides an appropriate umbrella under which such
an effort can at least be planned.  It is sufficiently broad-based as to
be representative and not exclusive and its democratic procedures
provide protection from the types of abuses that could be possible.  (I
do not mean to slight the AAAI; it's just that ACM seems to have more of
the "mechanisms" that such an effort will need.)

Also, I have felt for many years that ACM should, at least in the US,
provide the kind of accreditation of Computer Science curricula that the
engineering societies provide for theirs.

------------------------------

Date: 26 June 1984 07:04-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: Softwar


    From: syming%B.CC at Berkeley
    They sent us a tape with a fixed time (two months or so?)
    payment notice and stated that the program would vanish after that time. Of
    course, we paid in time and they sent us a 20(?)-digit long key word and
    instruction to make our trial copy a one-year-life-time program, since the
    service contract was year by year.

I'm a bit confused.  How could this particular program make itself
vanish without some external reference to a date?  It seems that a
simple routine to change the date to the date of original purchase
whenever the routine was invoked would do the trick.  Do you know if
anyone ever actually had their program vanish?  Maybe the whole thing
was a bluff?
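[The scheme being questioned presumably looks something like the following.
This is pure conjecture about the vendor's mechanism, nothing here is from
the original message; the point is that it depends entirely on the system
clock, so a routine that resets the date before each run defeats it:]

```python
# Hypothetical date-based "time bomb": a built-in expiry date is
# checked against the system clock on every run.
import datetime

EXPIRY = datetime.date(1984, 8, 31)        # hypothetical 2-month deadline

def check_license(today=None):
    today = today or datetime.date.today()
    return today <= EXPIRY                 # False once the bomb "goes off"
```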

------------------------------

Date: 24 Jun 84 12:26:54-PDT (Sun)
From: hplabs!sdcrdcf!sdcsvax!sdccsu3!ee171bbr @ Ucb-Vax.arpa
Subject: Best fitting curve
Article-I.D.: sdccsu3.1970

Given three points, what is the equation
of the best-fit curve?  (How does one
go about solving this?)

Also, what is Knuth's cocubic equation,
and would that solve my problem?


John F.

------------------------------

Date: 25 Jun 84 6:08:52-PDT (Mon)
From: ihnp4!houxm!mhuxl!ulysses!allegra!mouton!mwg @ Ucb-Vax.arpa
Subject: Re: Best fitting curve - 3 points
Article-I.D.: mouton.90

Since three points determine a parabola, just plug them into
y = Ax↑2 + Bx + C and solve the system.  If you are in more than two
dimensions, you can probably do a transformation somehow into the
plane determined by the three points and solve; then translate back.
                -Mark
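[Mark's recipe can be made concrete (an editorial sketch, not part of the
original post): plug the three points into y = Ax^2 + Bx + C and solve the
resulting 3x3 linear system, here by Cramer's rule.]

```python
# Fit the unique vertical-axis parabola through three points with
# distinct x-coordinates, returning the coefficients (A, B, C).
def parabola_through(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3

    def det(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det([[x1 * x1, x1, 1], [x2 * x2, x2, 1], [x3 * x3, x3, 1]])
    # d is nonzero exactly when the three x-coordinates are distinct
    a = det([[y1, x1, 1], [y2, x2, 1], [y3, x3, 1]]) / d
    b = det([[x1 * x1, y1, 1], [x2 * x2, y2, 1], [x3 * x3, y3, 1]]) / d
    c = det([[x1 * x1, x1, y1], [x2 * x2, x2, y2], [x3 * x3, x3, y3]]) / d
    return a, b, c
```

[For example, the points (0,1), (1,0), (2,3) give back y = 2x^2 - 3x + 1.]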

------------------------------

Date: 25 Jun 84 11:40:06-PDT (Mon)
From: decvax!yale-comix!leichter @ Ucb-Vax.arpa
Subject: Re: Best fitting curve
Article-I.D.: yale-com.4061

The notion of a "best fitting curve" through some points has no inherent
meaning.  You have to specify what kinds of curves you are willing to allow,
what kind of constraints you want to put on them, and what kind of measurement
of "fit" you are interested in.

Given n points in the plane, in general there is a unique (n-1)st degree poly-
nomial that passes through those points.  (Hence, there are infinitely many
nth degree polynomials, one for every other point in the plane that you can
consider to be an (n+1)st point.)  Such a polynomial is "best" in the sense
that it has 0 error at every point specified.  It is almost certainly not
what you would want for fitting data; it will generally oscillate between
your data points and will have uncontrollable behavior outside the range in
which your data points occur - i.e., the curve will not look at all "smooth"
to the eye.  If you want a curve that "looks good", you can use a
cubic (or higher-order) spline curve.  This is a curve defined by fitting
together polynomials; you take the first 4 points in order, pass a cubic
through them, take the last two and the next one, pick a cubic through those
that has the same derivative at the 4th point as the first cubic, etc...
(There are many other ways to choose spline curves.  This particular method
passes through all the points; in some cases, "smoothness" of some sort may
be more important than actually touching the points, so some kinds of splines
don't even pass through the given data points.  All spline curves are piece-
wise defined polynomials; there is no simple algebraic formula that defines
them; rather, there is a series of such formulas, one for each range of input
values.)
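[The unique (n-1)st degree polynomial mentioned above can be written down
directly in Lagrange form; a minimal editorial sketch.  It has zero error at
every data point but, as noted, can oscillate wildly between and beyond
them:]

```python
# Return the unique degree-(n-1) interpolating polynomial through the
# given (x, y) points, as a callable, in Lagrange form.
def lagrange(points):
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p
```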

If the goal of "best fit" is to produce good interpolated values for a function,
rather than a curve that "looks" like it is determined by the points, all sorts
of other techniques exist.  For example, a Chebyshev approximation will have
the least maximum error (assuming a model in which you are approximating some
known, complex function by choosing some representative points on it and
an approximating polynomial.)  However, least maximum error is not the same
as least average absolute error, or least RMS error, or...

So, in summary:  If you give me three points, I can write down pretty much
ANY function and find some way to defend its being the "best fit" to the
three given points.  You will have to specify your goals more precisely.
                                                        -- Jerry
                                        decvax!yale-comix!leichter leichter@yale

------------------------------

Date: Tue 26 Jun 84 11:44:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Frames Question

As a relief from the insubstantial debates on insubstantial souls, I
have a question about frames.

From my studies, I have observed two fundamentally different ways of viewing
slots in frames: as heads of predicates, or as instance variables of
objects.

In the first view, if a FIDO frame has an AGE slot with the value 2,
then that is equivalent to making the assertion AGE(FIDO,2).  Thus the
name of the slot becomes the head of a predicate.  The advantages of
this view are twofold: the inheritance mechanism of a frame system
then appears as an inference rule, and slots can be made into frames
themselves, thus making meta-level knowledge easy (for instance, one
could say DATATYPE(AGE,NONNEGATIVE←NUMBER) to assert that AGE could
only hold values of a certain type).  This view of slots as
first-class concepts or frames is exemplified by RLL, and by simple
frame systems built on top of logic languages.

The second view is exemplified by FRL, its descendants, and any of a
number of object-oriented systems.  Here, slots are in some sense
"local" to frames or classes of frames, and an AGE of FIDO may have a
completely different meaning than an AGE of PINOT←NOIR.  Meta-level
knowledge generally resides in facets and other subparts of a slot, so
in a well-developed system, the "value" of a slot is often a rather
complex entity.  Interestingly enough, the facets (such as $VALUE,
$IF-ADDED etc) are usually quite consistent in meaning (which no doubt
simplifies meta-knowledge; one then needs only a few frames named
$VALUE, $IF-ADDED, ... to express the meanings of facets).

Each view can be simulated using the other.  To simulate the "slot as
frame view", the "objects view" can make all slots be defined for a
toplevel frame THING, and then have frames with the same names as the
slots; while the "slot as frame view" can have slots of slot frames
that point to many different ones (so for instance the AGE slot frame
has a slot VERSIONS that points to ANIMAL←AGE and WINE←AGE slots - all
the associated paperwork is handled automatically by the system).  Of
course, such simulations may be extremely inefficient!  But I just
mention them to show that neither method is inherently more capable
than the other.
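[For concreteness, the two views might be caricatured in Python.  FIDO, AGE,
and the $-facets come from the text above; none of this is a real KRL's
interface:]

```python
# View 1: slot as predicate head.  AGE(FIDO,2) is just an assertion,
# and meta-knowledge about the AGE slot is another assertion of
# exactly the same kind -- slots are first-class concepts.
assertions = {
    ("AGE", "FIDO"): 2,
    ("DATATYPE", "AGE"): "NONNEGATIVE_NUMBER",
}

# View 2: slot as instance variable.  AGE is local to each frame, so
# FIDO's AGE and PINOT_NOIR's AGE can mean quite different things;
# meta-knowledge lives in facets inside the slot.
frames = {
    "FIDO":       {"AGE": {"$VALUE": 2,    "$IF-ADDED": "check-nonnegative"}},
    "PINOT_NOIR": {"AGE": {"$VALUE": 1983, "$IF-ADDED": "check-vintage"}},
}
```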

Now for the question: which view is favored by practitioners, and why?
Do any existing KRLs allow the view of slots to be changed according
to the problem, or do the two views require such fundamentally
different implementations that it's just better to stick to one or the
other?  Is it possible to do work using frames without being concerned
about the particular view imposed by the frame system? (my own
experience says no - converting an FRL-based program to an RLL-based
one is not easy!).  Are there problem domains in which one view is
distinctly superior to the other?  If so, what are they, and why is
that view superior?

Any answers or insights will be greatly appreciated...

                                                        stan shebs

------------------------------

Date: Wed 27 Jun 84 11:35:48-PDT
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: AI & statistics

Ken,

        Thank you for mentioning our work on RADIX in  the recent AILIST
response about AI and regression analysis. It prompted me to put together
a partial list of articles in AI and statistics, which I have been meaning
to do.  I've left out a number of articles by these authors in more obscure
journals and proceedings.  There is also work going on at Brunel University,
and at BBN, but I haven't seen any publications from them yet. If people
have additions to make, I would be happy to collect them and send them to
the list.

        If readers would like reprints, the following addresses may be
useful. Daryl Pregibon and Bill Gale can be reached at:

        Bell Laboratories
        600 Mountain Avenue
        Murray Hill, New Jersey
        07974

For D. Rodbard, write:

        D. Rodbard, M.D.
        National Institute of Child Health and Human Development
        National Institutes of Health
        Bethesda, Maryland

Our address here at the RADIX project is:

        Robert L. Blum and Michael G. Walker
        RADIX Project
        Department of Computer Science
        Margaret Jacks Hall
        Stanford University
        Stanford, California
        94305



                                        Mike Walker
                                        WALKER@SUMEX-AIM.ARPA

[Blum 82a]     Blum, R.L.
               Discovery and Representation of Causal Relationships from a
                  Large Time-oriented Clinical Database: The RX Project.
               Springer-Verlag, 1982.
               Vol. 19 in the Medical Informatics series edited by D.A.B.
                  Lindberg and P.L. Reichertz.

[Blum 82b]     Blum, R. L.
               Discovery, Confirmation, and Incorporation of Causal
                  Relationships from a Large Time-Oriented Database: The RX
                  Project.
               Computers and Biomedical Research 15(2):164-187, 1982.

[Blum 82c]     Blum, R. L.
               Induction of Causal Relationships from a Time-Oriented Clinical
                  Database: An Overview of the RX Project.
               In Proceedings of the Symposium on Computer Applications in
                  Medical Care.  IEEE Computer Society, 1982.

[Blum 84]      Blum, R.L.
               Two-Stage Regression: Application to a Time-Oriented Clinical
                  Database.
               1984.
               in preparation.

[Chambers 81]  Chambers, J.M., Pregibon, D., and Zayas, E.
               Expert Software for Data Analysis: An Initial Experiment.
               In 43rd Session ISI.  Buenos Aires, Argentina, 1981.

[Gale 83]      Gale, W.A., and Pregibon, D.
               Using Expert Systems for Developing Statistical Strategy.
               In Joint Statistical Meetings.  Toronto, 1983.

[Hajek 82]     Hajek, P., and Ivanek, J.
               Artificial Intelligence and Data Analysis.
               In COMPSTAT 1982, pages 54-60.  International Association for
                  Statistical Computing, Physica-Verlag, Vienna, 1982.

[Rodbard 83]   Rodbard, D., Cole,B.R., and Munson,P.J.
               Development of a Friendly, Self-Teaching, Interactive
                  Statistical Package for Analysis of Clinical Research Data:
                  The BRIGHT STAT-PACK.
               In Seventh Annual Symposium on Computer Applications in Medical
                  Care, pages 701-704.  IEEE Computer Society, 1983.

------------------------------

Date: Wed, 27 Jun 84 09:36:41 PDT
From: Michael Pazzani <pazzani@AEROSPACE>
Subject: Spelling Correctors = Geography test correctors?


Ignoring philosophical issues (after all, this is AILIST not a bad remake
of "My Dinner With Andre")  I don't feel that the spelling correctors or
the geography test correctors are really that intelligent.  The geography
corrector seems to be very similar to the programs which grade SAT tests.
Surely, one wouldn't want to call a SAT test correcting program AI
even though it does a better and faster job than I would.

I think it's more important to discuss how to make these programs smarter.
What would it take to have a spelling corrector find the intended word
instead of all of the possibilities?  A while ago, I worked on a program
to do word sense selection.  I wrote a spelling corrector for that
program which treated a misspelled word as a new word whose senses were
the senses of all the possible corrections.  It worked well when
things like part of speech or selectional restrictions could
disambiguate.  How could one make this program smarter?  Is it possible
to try the "closer" possibilities first?  Can you propagate the part of
speech or semantic constraints into the search for possibilities?  How
would one store a large dictionary so it is efficient to find nouns,
which are vehicles which look like "planh"?  How can you detect a
spelling error if the mistake is another word?  (e.g. "I just typed
rm *.  Can you restore my flies from backup tape?")  How do people
do this anyway?
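[One standard answer to "try the closer possibilities first" is to rank
candidate corrections by Levenshtein edit distance.  A textbook editorial
sketch, not the word-sense program described above:]

```python
# Minimum number of single-character insertions, deletions, and
# substitutions needed to turn string a into string b.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))           # distances from "" to b[:j]
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

def rank_corrections(word, candidates):
    return sorted(candidates, key=lambda w: edit_distance(word, w))
```

[So "planh" ranks "plane" and "plan" (distance 1) ahead of unrelated words;
part-of-speech or semantic constraints could then filter the short list.]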

------------------------------

Date: 23 Jun 84 8:49:24-PDT (Sat)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!colonel @
      Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs people
Article-I.D.: gloria.255

[This followup was actually written by a very clever computer program.]

As you say, the Turing test is a ←conversational← test.  Do you remember
Turing's original "conversation"?  "...Count me out on this.  I never
could write poetry."

The whole conversation is fatuous!  But then, it has no bona fide purpose.
It was merely set up by a scientist to prove something.  Nothing would
be easier, for that matter, than to program a computer to take part in
what Berne calls "8-stroke rituals":

        Hi.
        Hi.
        How are you?
        Fine.  How are you?
        Fine.  Nice day, isn't it?
        Yes.
        Well, goodbye.
        Goodbye.

But would you want to carry on such a conversation with a computer?
One converses socially only with conversers that one knows to be people.

Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

End of AIList Digest
********************

∂28-Jun-84  1428	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #82
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Jun 84  14:28:28 PDT
Date: Thu 28 Jun 1984 11:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #82
To: AIList@SRI-AI


AIList Digest            Friday, 29 Jun 1984       Volume 2 : Issue 82

Today's Topics:
  Humor & Business - How Not to Buy a Hero-1,
  Seminars - HAM-ANS Natural Language System,
    Expert System for Medical Consultation,
    Expert Systems at Hewlett-Packard,
  Conferences - Logic Programming Symposium,
    Workshop on Language Generation
----------------------------------------------------------------------

Date: 27 Jun 1984 08:23:02-EDT
From: kushnier@NADC
Subject: How Not to Buy a Hero-1

        From kushnier@NADC Tue Jun  5 08:44:53 1984
        Date: 5 Jun 1984 08:38:27-EDT
        From: kushnier@NADC
        To: SURINA@AFSC-HQ, kushnier@NADC.ARPA
        Subject: Re: Small Computer Procurements

        HOW NOT TO BUY A HERO 1
              By Ron Kushnier

        I am an Engineer, by trade.
        I have my Double-E.
        And When I saw the Hero-1,
        I knew it was for me.

        I gave the order to my boss
        And explained the application,
        He very quickly signed the thing
        With the wildest jubilation.

        The order went to Purchasing
        And was no sooner in the door,
        When panic struck
        the buyer screamed,
        "No one bought a Hero-1 before".

        The order came back down to me
        With a simple "DISAPPROVED"
        My dreams were smashed
        My hopes were dashed
        My plans had been removed.

        Now I could understand this
        If a Shoeshine Boy I'd be
        But I'm supposed to be
         An Engineer
        And work in R&D.

------------------------------

Date: 28 Jun 1984 08:15:48-EDT
From: kushnier@NADC
Subject: Another Toy


Another Toy
   By Ron Kushnier

Here's another toy
That my husband wants to buy.
I'll never know
What good it is
Or any reasons why.
But HERO-1, my fate is doomed
I see it very clear.
The age of
Personal Computer Pets
Is very nearly here.

------------------------------

Date: Tue 26 Jun 84 11:29:16-PDT
From: Emma Pease <EMMA@SU-CSLI.ARPA>
Subject: Seminar - HAM-ANS Natural Language System

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

The following will take place on Friday, June 29 in the Ventura
Conference room from 2:00 to 4:00 (followed by tea).

THE DIALOG SYSTEM HAM-ANS:  NATURAL LANGUAGE ACCESS TO DIVERSE APPLICATION
SYSTEMS (H. Marburger, K. Morik, B. Nebel) -- St

This talk will introduce the overall goals of the NL-System HAM-ANS (HAMburg
Application-oriented Natural language System) which is currently being
developed at the University of Hamburg.  HAM-ANS encompasses three different
application classes:  natural language access to a vision system (traffic at
a street crossing), to a relational database system (fishery data), and for
guiding a competitive dialog with a client (hotel reservation situation).
The system accepts typed input in colloquial German and produces typed German
responses.  The system's general architecture and knowledge sources will be
introduced.

USER MODELING, EVALUATION STANDARDS, AND DIALOGUE STRUCTURE -- THE HAM-ANS
APPROACH

(Katharina Morik) --

AI dialogue systems are now developing from question-answering systems toward
advising systems.  This includes:

        *       structuring dialog

        *       understanding and generating a wider range of speech acts than
                simply information request and answer

        *       modeling the user's familiarity with the system, his/her state
                of knowledge about the domain, and his/her evaluation
                standards (goals)

In this talk, first the field of user modeling is structured according to the
different aspects of the user (familiarity, knowledge, evaluation).

Second, we describe our ongoing work in this field and relate it
to other approaches.  User modeling in HAM-ANS is closely connected to dialog
structure and dialog strategy.  In advising the user, the system generates and
verbalizes speech acts.  The choice of the speech act is guided by the user
profile and the dialog strategy of the system.

------------------------------

Date: Tue 26 Jun 84 11:55:27-PDT
From: Ted Shortliffe <Shortliffe@SUMEX-AIM.ARPA>
Subject: Seminar - Expert System for Medical Consultation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

There will be a special seminar presented by Mario Fieschi from Marseilles
on Tuesday, July 10, from 2:30-3:30pm in the TC-135 conference room at the
medical school.  Mario has done some interesting work on medical expert
systems, and is spending a few months at MIT with Peter Szolovits (who was on
his thesis committee).  He will be visiting Stanford from July 9-11.

                   -------------------------------

        Speaker:     Mario Fieschi, MD, PhD
        Affiliation: University of Marseilles, France
        Title:       SPHINX: An Expert System for Medical Consultations
        Place:       Room TC-135, Medical School
        Time:        Tuesday, July 10, 2:30-3:30pm

        I will present an outline of the program SPHINX, designed for the
definition of medical knowledge and construction of a rule-based system,
currently being used in:

        . Therapeutic decisions : Application in diabetes
        . Diagnostic decisions : Application in jaundice
        . Tool for education : Application in jaundice

------------------------------

Date: Wed 27 Jun 84 09:46:18-PDT
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: Seminar - Expert Systems at Hewlett-Packard

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

                             SIGLUNCH

DATE:        Friday, June 29, 1984
LOCATION:    Chemistry Gazebo, between Organic & Physical Chemistry
TIME:        12:05
SPEAKER:     Steven Rosenberg
             Hewlett-Packard Research Laboratories
             Palo Alto

TOPIC:       Expert Systems at Hewlett-Packard

The Applications  Technology  Laboratory  of HP  Labs  is  engaged  in
developing "industrial strength"  AI. As part  of its contribution  to
this effort,  the Expert  Systems Department  has engaged  in  various
"experiments"  to  develop   expert  system   prototypes.   One   such
experiment involved  the development  of PICC,  an expert  system  for
diagnosing flaws in IC  wafers during negative photolithography.  This
talk will  discuss  the  development  and  status  of  PICC.   Besides
describing the technical aspects of PICC,  I will explore some of  the
issues involved  in conducting  expert  systems experiments:  why  was
photolithography chosen  as  a  good  area  to  apply  expert  systems
technology; what were the  pitfalls in moving  PICC from a  laboratory
environment into a real fab line; even if it works, is it useful?

------------------------------

Date: 26 Jun 84 14:28:00-PDT (Tue)
From: hplabs!hp-pcd!uoregon!conery @ Ucb-Vax.arpa
Subject: Logic Programming Symposium
Article-I.D.: uoregon.30100001

>From John Conery (conery@uoregon)


                            -- Announcing --

             1985 International Symposium on Logic Programming

        Tentatively scheduled for Boston, Massachusetts, June 1985

        Sponsored by IEEE Technical Committee on Computer Languages

The symposium will cover implementations and applications of logic programming
systems, including (but not limited to) parallel processing, expert systems,
natural language processing, systems programming, implementation techniques,
and performance issues.

Authors should send 8 copies of their papers (8-20 pages, double spaced) to

        John Conery
        Department of Computer and Information Science
        University of Oregon
        Eugene, OR   97403

Submission deadline is November 1, 1984.  A formal call for papers will be
issued shortly.  For more information, contact:

        Conference Chairman:    Doug DeGroot
                                IBM T.J. Watson Research Center
                                PO Box 281, Yorktown Hts. NY 10598

        Technical Co-Chairmen:  Jacques Cohen
                                Computer Science Dept - Ford Hall
                                Brandeis University
                                415 South St
                                Waltham MA  02254
                                CSNET:     jc@brandeis
                                ARPANET:   jc.brandeis@csnet-relay

                                John Conery
                                Department of Computer and Information Sci
                                University of Oregon
                                Eugene, OR  97403
                                CSNET:     conery@uoregon
                                ARPANET:   conery.uoregon@csnet-relay

------------------------------

Date: Thu 28 Jun 84 09:15:49-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Workshop on Language Generation

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]


                INTERNATIONAL WORKSHOP ON LANGUAGE GENERATION

Organizers - Doug Appelt and Ivan Sag
Staff - Emma Pease
Dates - July 8 - 11
Size - 30 invited + 30 local
Location - Stanford University
Sponsors - National Science Foundation, American Association for
           Artificial Intelligence, CSLI, Fujitsu Laboratories, Ltd.

The Second International Workshop on Language Generation will be held at
Stanford University from July 8-10, immediately following the COLING
conference.  The workshop, organized by Doug Appelt and Ivan Sag is designed
to allow researchers working in the field of language generation to share
recent research results and discuss matters of importance to the field.
Topics of discussion for this workshop include the design of grammatical
formalisms for language generation, the role of planning and speech act
theory in language generation, the production of extended discourse, the
foundations for a theory of language generation, modeling the hearer's
knowledge and intentions, and producing coherent explanations of reasoning
and decision-making.  Linguists as well as artificial intelligence
researchers will participate in the workshop.

The workshop is being sponsored by a grant from the National Science
Foundation, the American Association for Artificial Intelligence, and a gift
from Fujitsu Laboratories, Ltd.

ADDITIONAL INFORMATION: Conference starts at noon, July 8 in the
Elliott Program Center.  This is a workshop and so interested people
should check with Doug Appelt before going.

[...]

------------------------------

End of AIList Digest
********************

∂05-Jul-84  2304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #83
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Jul 84  23:04:08 PDT
Date: Mon  2 Jul 1984 22:16-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #83
To: AIList@SRI-AI


AIList Digest            Tuesday, 3 Jul 1984       Volume 2 : Issue 83

Today's Topics:
  Administrivia - Late Delivery,
  AI Tools - LISP in AZTEC C & Interlisp under UNIX,
  Games - Chess 4.5,
  AI Literature - AI Text Recommendations Wanted,
  AI in Process Control - References Wanted,
  Commonsense Reasoning - Importance of Context,
  Graphics - Three-Point Curve,
  AI and Business - Second Summary & Industry Newsletter & Survey,
  Expert Systems - New Products,
  Machine Translation - Industry News,
  Natural Language - UNIX Interface
----------------------------------------------------------------------

Date: Thu 5 Jun 84 21:00:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Late Delivery

This issue and the next two have been delayed by a combination of
operator error and poorly-designed software.  (I left a closing
quote out of an address in the distribution list; the mailer
then failed to expand the list and yet gave no indication of the
failure.)  My apologies for the delay.

This message will, however, provide an interesting challenge to the
individuals who are studying automated retrieval of information from
the AIList archive.  Will the digest date or the message date prevail?
Will the logical inconsistency cause computers across the country to
blow their fuses?  Will this self-referential message create a
wormhole in space-time?  Has the futility of temporal reasoning
finally been demonstrated?  Tune in next time ...

					-- Ken Laws

------------------------------

Date: Fri, 29 Jun 84 17:08:17 EDT
From: William K. Cadwallender (LCWSL) <wkc@Ardc.ARPA>
Subject: LISP IN AZTEC C

        I was at a seminar in applied AI recently, and someone there told me
about a LISP written in Aztec C under CP/M (... something like Z-LISP?) which
was allegedly in the public domain and available possibly from SIMTEL. Does
anyone out there know anything about this LISP, or any LISP that I could run
in any manner on a 6502 system?

                        William Cadwallender
                        (wkc@ARDC)

P.S. I am interested in the C SOURCE code for this thing.

------------------------------

Date: 28 Jun 84 1:07:33-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!whuxle!spuxll!abnjh!u1100a!pyuxn!pyuxww!gamma!ulysses!burl!idi!kiessig @ Ucb-Vax.arpa
Subject: Interlisp under UNIX?
Article-I.D.: idi.207

        Can someone tell me if there is a version of Interlisp
that runs under UNIX?  If so, where/how does one go about
getting a copy?  Thanks,

Rick Kiessig
{decvax, ucbvax}!sun!idi!kiessig
{akgua, allegra, amd70, burl, cbosgd, dual, ihnp4}!idi!kiessig
Phone: 408-996-2399

------------------------------

Date: Sat 30 Jun 84 13:42:30-PDT
From: System-Assoc Dir <SRISW@SUMEX-AIM.ARPA>
Subject: Chess 4.5

I would like to inquire of the list if there is anyone who would like
to participate in converting a Pascal clone of the Northwestern University
Chess 4.5 program into TOPS-20 assembly language. The Pascal clone is
partially written and unrelated to the bug-filled version published in Byte
a few years ago. This version is based on a technical article by the
authors, Slate and Atkin. Replies to g.mclure@su-score.

------------------------------

Date: 3 Jul 84 22:10:20-EDT (Tue)
From: ihnp4!mgnetp!burl!idi!kiessig @ Ucb-Vax.arpa
Subject: Wanted: AI texts & references
Article-I.D.: idi.205

        I'm looking for some good AI references and/or texts.
Pointers to books that are considered a "must" are particularly
wanted, but even ones that you consider "good reading" would
be helpful, too.  Thanks,

Rick Kiessig
{decvax, ucbvax}!sun!idi!kiessig
{akgua, allegra, amd70, burl, cbosgd, dual, ihnp4}!idi!kiessig
Phone: 408-996-2399

------------------------------

Date: Thu, 28 Jun 84 18:50 EDT
From: JPAnderson@MIT-MULTICS.ARPA
Subject: AI in process control


I'm looking for information on AI applications in the process control
industry.  Any information on what is being done or what should be done
would be greatly appreciated.

Jay Anderson JPAnderson -at mit-multics

------------------------------

Date: 28 Jun 84 8:41:49-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!whuxle!spuxll!abnjh!u1100a!pyuxn!rlr
      @ Ucb-Vax.arpa
Subject: Re: a third of ten
Article-I.D.: pyuxn.794

> Please.   Everyone knows that 2*2=5 for sufficiently large values of 2.

Now hold on.  You're misquoting one of the great axioms of science,
Skillman's Axiom.
                 ____
                V 3     =     7

        ... for very large values of 3.

Back in 1973 at Cornell was the first time I heard of this "axiom".  Has
anyone actually traced it back to its real origin?

[Skillman is now an astronomer somewhere in the northwest.]


WHAT IS YOUR NAME?                      Rich Rosen
WHAT IS YOUR NET ADDRESS?               pyuxn!rlr
WHAT IS THE CAPITAL OF ASSYRIA?         I don't know that ...  ARGHHHHHHHH!

------------------------------

Date: 9 Jun 84 10:55:00-PDT (Sat)
From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: hpfclq.68500004

I think a human or a computer presented with this problem might well
ask for clarification if no further context were available (none was
supplied here).

For the problem given I would suspect not an inconsistency but rather
that I was being asked to set up a mapping from one "world" to another.
A similar problem is:

        If 32 is 0 and 212 is 100  what is 65?

This is of course a Fahrenheit-to-Celsius conversion problem.
Read out of context it sure sounds strange.  This kind of
mapping problem is very common and might be done correctly
by a computer without blinking an LED.  :-)
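The anchor-point mapping described here amounts to linear interpolation between two known correspondences. A minimal sketch (in Python; the function name is hypothetical, not from the original message):

```python
def linear_map(x, src_lo, src_hi, dst_lo, dst_hi):
    """Map x from one scale to another, given two anchor points
    (src_lo -> dst_lo and src_hi -> dst_hi) that fix the line."""
    return dst_lo + (x - src_lo) * (dst_hi - dst_lo) / (src_hi - src_lo)

# "If 32 is 0 and 212 is 100, what is 65?"  (Fahrenheit to Celsius)
print(round(linear_map(65, 32, 212, 0, 100), 2))  # 18.33
```

The two given pairs determine the line completely, so the "strange" question has exactly one answer once the mapping is recognized as linear.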

                        -Robert (animal) Heckendorn
                        ..!hplabs!hpfcla!robert

[Readers might note that the recently posed problem of fitting the
"best" curve through three points was similarly underconstrained.
I have suppressed several responses of the form "The problem as
stated has infinitely many solutions; please clarify what you want."
Other responses have been a survey of applicable techniques and
a couple of replies suggesting particular techniques that are usually
appropriate.  -- KIL]

------------------------------

Date: 27 Jun 84 10:57:00-PDT (Wed)
From: hplabs!tektronix!orca!warner @ Ucb-Vax.arpa
Subject: three point curve
Article-I.D.: orca.915

I solved a similar problem once.  My solution was without regard to
"best fit", as the curve didn't pass through the "middle" point.
Mathematically simple, it was done with a recursive procedure that
accepted three points.  It might be applied here if some method were
devised for extrapolating two more points from the original three.

Brief description of the method:
Calculate the mid points of the lines between the "end" points and
the "middle" point, i.e. between 1,2 and 2,3. You now have five points.

                              (2).

                                   (4).

                 (5).                   (3).



    (1).

Pass the original procedure 1,5,4 then 5,4,3 .. repeat.

When the distance between the points is "small enough" ..
connect them with a line.

If the original problem required a point through 1,3,X then 2 would
have been extrapolated, somehow, from 1,3,X.

The curves made this way look nice and smooth on a macro scale but
look wiggly on a micro scale.

Ken Warner
..tektronix!tekecs!warner

------------------------------

Date: Fri, 29 Jun 84 19:11:02 pdt
From: syming%B.CC@Berkeley
Subject: Second Summary of AI for Business


I have compiled a second summary of AI For Business which contains the
info I have received since the time I posted the first summary about one
month ago.  If interested, please send me mail.  It is rather long, so I
don't want to waste the resources here.
                                        -- syming hwang

[The length is not excessive for AIList, but Syming is also interested
in compiling a list of people with particular interest in this subject.
I can help anyone who has difficulty constructing a net address for
him. -- KIL]

------------------------------

Date: Mon 2 Jul 84 11:04:18-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Applied Artificial Intelligence Reporter

I have received a mailing from the ICS (Intelligent Computer Systems)
Research Institute of the University of Miami.  They are hawking
the Applied Artificial Intelligence Reporter, a monthly newsletter that
expanded to 12 pages in February (Vol. 1, No. 5).  The excerpts shown
in the ad look like fairly typical trade press material, with everything
having been passed through an editor or professional reporter to make
sure it's readable.  The publishers are promising a broad mix of
news, editorials, tutorials, reviews, etc.  The newsletter is available
for $49 ($39 for AAAI members) per year from the ICS Research Inst.,
U. of Miami, P.O. Box 1308-EP, Fort Lee, NJ 07024.

                                        -- Ken Laws

------------------------------

Date: Mon 2 Jul 84 21:44:04-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI and Industry

The July issue of IEEE Spectrum mentions (p. 69) a new two-volume
report, "Artificial Intelligence--A New Tool for Industry and Business,"
from Technical Insights, Inc., P.O. Box 1304, Fort Lee, N.J. 07024,
(201) 944-6204.  Volume I is said to contain explanations of
expert systems, natural-language processing, vision, touch sensing,
cognitive modeling, computer hardware, VLSI design, and applications,
as well as a market analysis and forecast for each technology.  Volume
II presents "hundreds of annotated programs currently under study at
universities," as well as lists of available research reports and
technical publications for each university program.  The two-volume
report costs $485 plus 6% in NJ or $27 overseas postage and handling.

                                        -- Ken Laws

------------------------------

Date: Mon 2 Jul 84 13:18:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert System Tools

Teknowledge (of Palo Alto) has released two software packages for
expert system builders.  M.1, at $12,500, is for those who want to
explore expert systems concepts on the IBM PC.  The price includes
a four-day training course for one person.  S.1 is for professional
knowledge engineers developing large-scale knowledge systems.  It
sells for $50,000, which includes a two-week training course for two
people, a sample system with detailed case history, and access to
Teknowledge's applications engineering services.  S.1 currently
runs on Xerox 1100 and 1108 workstations and is being ported to
VAX 11/750 and 11/780 under VMS.

                                        -- Ken Laws

------------------------------

Date: Mon 2 Jul 84 21:57:21-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Machine Translation

CNN News carried a story over the weekend about Bravice Int., which
claims to have the first commercial Japanese-to-English translation
system.  They claim about 80% accuracy for technical text, and charge
$100,000 for the program.  An English-to-Japanese translator is in
the works.

                                        -- Ken Laws

------------------------------

Date: Mon 2 Jul 84 22:06:28-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Natural-Language Interface

We had a query recently about natural-language interfaces to UNIX.
Anyone interested in this subject should read "Talking to UNIX in
English: An Overview of UC" in the June issue of Communications of
the ACM, pp. 574-593.  The article is by Robert Wilensky, Yigal Arens,
and David Chin.  It has considerably more detail than the articles
I mentioned previously.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂05-Jul-84  2203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #84
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Jul 84  21:55:37 PDT
Date: Tue  3 Jul 1984 13:40-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #84
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Jul 1984      Volume 2 : Issue 84

Today's Topics:
  Brain Theory - Memory,
  Poetry - Robots,
  Turing Test - Discussion,
  Law - Robot Rights,
  Cognitive Psychology - Mind and Brain,
  Seminar - Knowledge-Based Circuit Design
----------------------------------------------------------------------

Date: 27 Jun 84 13:38:00-PDT (Wed)
From: pur-ee!uiucdcs!convex!graham @ Ucb-Vax.arpa
Subject: Re: Objection to Crane: A Quick Question - (nf)
Article-I.D.: convex.45200002

"...    a person can be enabled (through
        hypnosis or by asking him the right way) to remember
        infinite details of any experience of this or prior life
        times ... "

> Memory recall under hypnosis has been found to be just as reconstructive
> (perhaps more so) as normal memory.  Hypnotic states buy you some recall,
> but not that much!

I have heard (but have no reference document to cite) that neuro-surgeons
have demonstrated that stimulation (i.e, contact with) certain parts of the
brain can produce complete recall of all sensory input from a past event,
even of details not originally "noticed".  There is apparently a complete
record of sensory input stored which some mechanism filters, so that we are
"aware" of only some of it.  Can anyone corroborate this, and cite a reference?

Marv Graham; ConVex Computer Corp. {allegra,ihnp4,uiucdcs,ctvax}!convex!graham

------------------------------

Date: 03 Jul 84  17:06:08 bst
From: "J.R. COWIE%rco"@ucl-cs.arpa

Use of if in natural language:

The following is a brief description of the project proposal by one
of our students on a M.Sc. course in Information Technology.
This student is originally a philosopher by profession, but has
decided to move over into Computer Science. He is interested in
using prolog to test out his ideas.

If you have any suggestions or references send them to me and I will pass them
on to him.  (j.r.cowie%rco@ucl-cs.arpa)
            ---------------------------------------------

    It is arguable that contraposition is not a universally
valid principle of inference for empirical conditionals and
yet we use it, apparently successfully, all the time. An
obvious suggestion is that we are discriminating and select
a subclass of cases to contrapose. We then ask what
characterizes that subclass.
    The approach to be adopted attempts to isolate several
components of a conditional 1) a truth-functional component
2) an inferential component 3) an explanatory component. An
attempt is to be made to explain features of the logic of
conditionals in terms of the relations between these
components and in particular the relation between the
explanatory direction of a conditional (antecedent-to-
consequent or consequent-to-antecedent) and the inferential
direction.
    In the philosophical literature questions about the
validity of contraposition are generally associated with
questions about the validity of (the invalid principles)
"strengthening the antecedent" (i.e. the logic of "if" in
English is not monotonic) and transitivity. And all these
questions are generally asked under the headings "Subjunctive
Conditionals","Counterfactuals" or "Contrary-to-fact
Conditionals". It may well be appropriate to cover these
topics to some extent.
    Since what is envisaged is of the nature of an empirical
hypothesis concerning the logic of natural language
statements, and that hypothesis will take the form of a set
of principles of natural inference, it is expected that it
will be desirable to construct a (PROLOG) inference machine
employing these principles for test purposes. It has not been
decided how the machine should work or how it should be
employed.
    I am not acquainted with the psychological literature or
artificial intelligence literature on these topics and would be
grateful for any references.

Ian Wilson.

------------------------------

Date: 29 Jun 1984 08:00:20-EDT
From: kushnier@NADC
Subject: The Law


The Law
   By Ron Kushnier

The Robotic Laws of Asimov
Have always been in Fiction,
But now through High Technology
They've lost that last restriction.
So the Robots are becoming real
They are our new found tools.
Although they may be getting smart,
They must still obey the Rules.

------------------------------

Date: 2 Jul 1984 08:33:47-EDT
From: kushnier@NADC
Subject: The Last Laugh


The Last Laugh
    By Ron Kushnier


Some people laughed
When they heard me say,
"We need a robot right away".
And I must admit
I had to smile
When I thought about it
For awhile.
This funny little box of steel
Running about on one big wheel
Raising its arm
So it can say,
"excuse me folks,
But you're in my way".

------------------------------

Date: 3 Jul 1984 09:10:53-EDT
From: kushnier@NADC
Subject: The Robot Boom


The Robot Boom
    By Ron Kushnier

The Robot Boom
Will be here soon-
Bigger than Home Computers.
The parents will pay,
But the kids will play
And be their main recruiters.

For there is no fear
In our children, dear
Of androids or machines.
Kids feel quite at ease,
Think it's a breeze
To process all our dreams.

------------------------------

Date: 27 Jun 84 17:36:05-PDT (Wed)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs people
Article-I.D.: pucc-i.331

>  [This followup was actually written by a very clever computer program.]
>
>  As you say, the Turing test is a ←conversational← test.  Do you remember
>  Turing's original "conversation"?  "...Count me out on this.  I never
>  could write poetry."
   [...]
>  The whole conversation is fatuous!  But then, it has no bonafide purpose.
>  It was merely set up by a scientist to prove something.
>
>  But would you want to carry on such a conversation with a computer?
>  One converses socially only with conversers that one knows to be people.

Your bug-killer line turns out to have more apparent truth in it than the
rest of the article.  It's too bad you didn't read the original conversation
which you quoted from.  I am giving you the benefit of the doubt here by
assuming that you did not deliberately misrepresent the conversation (and that
you were not unable to understand it):

        Q: Please write me a sonnet on the subject of the Forth Bridge.
        A: Count me out on this one.  I never could write poetry.
        Q: Add 34957 to 70764.
        A: (Pause about 30 seconds and then give as answer) 105621.
        Q: Do you play chess?
        A: Yes.
        Q: I have K at my K1, and no other pieces.  You have only K at K6
           and R at R1.  It is your move.  What do you play?
        A: (After a pause of 15 seconds) R-R8 mate.

The point of the first answer is that no human is an expert on everything,
and that a program which hopes to pass the Turing test had best not give
itself away by being overly knowledgeable.

Did you notice that the answer to the second question is incorrect?  It
should be 105721.  [Aha! a sexist machine!  It assumes that women are no
good with figures.  Oops--I forgot.  Since you haven't read Turing's
"Can a Machine Think?" you won't understand what women have to do with
this discussion.  Oh, well...]


Dave Seaman                     "My hovercraft is full of eels."
..!pur-ee!pucc-i:ags

------------------------------

Date: 29 Jun 84 17:22:46 EDT
From: kyle.wbst@XEROX.ARPA
Subject: Robot Rights

re: Why distinguish humans from machines...

For the same reason the Supreme Court in the last century got into the
business of deciding what fraction of a human being a slave was for
political purposes. Pandora's box will be opened again on this issue in
the future if and when we succeed in creating AI devices that pass
various tests. I don't care if the devices are made of silicon, biomass
(shades of genetic engineering), or some hybrid combo. The point is, I
can see some group organizing them into either a union, a voting block,
or a public interest group to keep another ton of lawyers living off the
fat of the land for years to come.

------------------------------

Date: 26 Jun 84 7:42:05-PDT (Tue)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!unc!ulysses!gamma!pyuxww!pyuxn!rlr
      @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: pyuxn.784

> "subconsious", "mind", etc -- what DO these words mean?  More
> importantly, do these things exist?
> I assert they do not.  I take the behaviorist philosophy that what
> you call "mind" is a thing invented by Plato or some dead Greek
> person which is just as mystical and unreal as "the Gods" or
> "magic."
> What you have is a brain.  What you do is behavior.  You are an
> organism that responds to AND IS CHANGED BY your environment.
> That's all.  The rest you've made up or assumed was true because
> some dead greek person said it was there.
> Show me your "mind" -- demonstrate its existence.  I dare you.

BRA-VO!!!!!!!


It doesn't matter what you wear, just as long as you are there.
                                                Rich Rosen    pyuxn!rlr

------------------------------

Date: 27 Jun 84 14:38:01-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!akgua!whuxle!spuxll!ech @ Ucb-Vax.arpa
Subject: Re: Human Models
Article-I.D.: spuxll.514

Jules Greenwall's suggestion is an extreme example of what researchers in the
area refer to as a "meat machine."  Traditionally, such experiments contain
a neuron model and attempt to simulate a brain at THAT level of detail.

His suggestion also suffers from a similar problem: assuming that one
has a complete quantum-mechanical model of a human brain, how is one to model
the behavior of its molecules, in real time, with a computer made of molecules?
I thank him for the suggestion, of course, because it drives home an
important point: you simply can't build a real-time emulation of a brain
by modelling it at the quantum-mechanical level; you MUST use some
"higher level" model.

Note that, except for rather simple neuron nets, traditional meat machines
are also many orders of magnitude removed from a real-time simulation of
a brain of human-class complexity.

Finally, I will note that we are on the verge of opening yet another round
of the reductionist/wholist debate; yet again, I will recommend that you
go devour a copy of "The Mind's I".

=Ned=

------------------------------

Date: Mon 2 Jul 84 13:51:39-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral - Knowledge-Based Circuit Design

            [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                        KNOWLEDGE-BASED CIRCUIT DESIGN


                               Christopher Tong
                          Computer Science Department
                              Stanford University


                             Dissertation defense
                       2:30 p.m., Tuesday, July 17, 1984
                            Margaret Jacks Hall 146


DESIGN AS DIALECTIC. Design is a dialectic between the designer and what is
possible. As design of an artifact, circuit design involves creating artifact
descriptions that satisfy the requirements imposed by designer, environment,
domain, logic, and limited experience; good design exploits these requirements
of the design problem by converting them into constraints on the design
process. As design of a functionally decomposable artifact, circuit design
entails recursive partitioning of functional requirements in such a way that
the partitioned requirements map onto technologically available structures that
satisfy them.  Finally, viewing circuit design as the design of a physical
computational system, we can categorize the required functionality along a
small number of functional dimensions (e.g. control, communication, behavior).

        This thesis makes several contributions. It introduces the notion of a
playful design process as an ideal toward which the engineering of design
knowledge should be steered; it describes the extent to which the "playful
design" ideal can be realized by a circuit design process. It extends the
notion of play to playful control of the design process; and finally, it
presents an ontology of dimensions for categorizing and relating design
requirements and approaches.

A PLAYFUL DESIGN PROCESS. Play is doing what one wants to do when one wants to
do it. Playful design is possible to the degree that: refinement steps can be
carried out in an order-insensitive manner; and decomposition creates
context-insensitive components. We show that the benefits derived from enabling
such play in the process of design include: enablement of goal-directed
refinement, and an exponential reduction in number of solutions considered over
a more traditional "fixed phases" approach to circuit design. By characterizing
circuit specifications by the ubiquitous functional dimensions of control,
communication, and behavior, we enable a measure of order-insensitive
refinement; these functional dimensions induce a set of evaluation dimensions
for performing goal-directed refinement. Viewing components as processors
facilitates context-insensitive decomposition.

PLAYFUL CONTROL OF THE DESIGN PROCESS. Playful control entails being able to
resolve current design problems by pursuing strategies that are appropriate
given the resource limitations of the designer. Playful control is possible to
the extent that: the problems produced by the design process are
well-categorized; and problem posting and resolution can be separated. Playful
control is knowledge-intensive, drawing on a library of strategies indexed by
problem type and resource allocation.

        We describe an interactive computer program called DONTE (Design
ONTology Experiment). DONTE has served to implement, motivate, and help debug
the contributions made by this research.

------------------------------

End of AIList Digest
********************

∂06-Jul-84  1220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #85
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Jul 84  12:19:21 PDT
Date: Wed  4 Jul 1984 11:06-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #85
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Jul 1984      Volume 2 : Issue 85

Today's Topics:
  Education - Request,
  AI Tools - Interlisp under UNIX & Common Lisp,
  Numerical Analysis - Best Fitting Curve,
  Games - War Games,
  Humor - Mongooses & Man, Bytes, Dog,
  Seminar - Analogy in Legal Reasoning
----------------------------------------------------------------------

Date: Tue, 3 Jul 84 14:39:45 pdt
From: cjet%ucbamber.CC@Berkeley
Subject: Request for Participation in Education

                              ANNOUNCING
                              ----------


        The opening in September 1984 of a Model School which will develop
new ways to use computers in education for use throughout the Berkeley
Unified School District, the state, and the nation.  The District seeks
collaboration with persons, firms, research organizations, universities,
and others interested in the leading edge of technology in the schools.


                               FEATURING
                               ---------

   *    Computer workstations on local area networks
   *    Many workstations per classroom
   *    Computers used to teach regular curriculum
   *    Computers used for classroom and school administration
   *    Total Integration, including persons with physical and mental
        disabilities in the classroom
   *    Collaboration by prominent members of  business  and  faculty  from
        the  University  of California at Berkeley toward curriculum design
        and technology integration
   *    A site for research, development  and  demonstration  of  effective
        use of educational technology


                        REQUEST FOR INFORMATION
                        -----------------------

We desire information about advanced hardware or software systems that
could be acquired for use in the Model School.  In addition to computers,
courseware, and networks, the District is interested in peripherals that
address the needs of younger children and children with disabilities, such
as special keyboards, graphic displays, voice synthesisers, etc.

The Berkeley Unified School District is investing substantial funds in the
school, staff and technology.  We seek collaboration, sponsorship,
assistance and state-of-the-art products.  People of many disciplines,
skills and viewpoints are working together to make major advances.  We
invite you to explore fuller involvement and/or participation in any of
the major aspects of this exciting project.  Please contact us at
cjet@amber@berkeley or call (415) 527-9030.
                                Eric Novikoff, C-JET

This flyer is being sent by the Center for Jobs Education
and Technology, the non-profit corporation which is the technology
consultant for the Berkeley School District's Model School Project.

------------------------------

Date: 29 Jun 84 12:51:23-PDT (Fri)
From: hplabs!sdcrdcf!sdcsvax!noscvax!goodhart @ Ucb-Vax.arpa
Subject: Re: Interlisp under UNIX?
Article-I.D.: noscvax.532

Information Sciences Institute (ISI) provides INTERLISP for VAX computers
running either the UNIX or VMS operating systems.  For further information
call ISI at (213) 822-1511.

------------------------------

Date: 2 Jul 1984 1322-EDT
From: WHOLEY at CMU-CS-C.ARPA
Subject: Common Lisp

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

First of all, there's a CLISP BBoard (C for Common) that things like this
should probably be discussed on.  Since a number of questions were asked in
this forum [CMU bboard], I'll answer them in this forum.

1. DEC is supposedly either doing or planning to do ("real soon now") a port of
DEC Common Lisp to Unix.

2. I'd be wary of any "compatibility" package for Common Lisp in Franz.  There
are a number of complicated Common Lisp features that are somewhat difficult to
implement from the ground up, and I doubt that a "compatibility" package can
accurately capture enough to make large Common Lisp programs run.  Such
features include (but are not limited to):
        The package system, which provides one with separate namespaces,
        Lexical scoping of variables (upward and downward "funargs"),
        Multiple value returns from functions,
        Arrays with fill pointers, adjustable arrays, and displaced arrays.

3. Golden Common Lisp from Gold Hill Computers is a subset of Common Lisp for
the IBM PC.  It is intended more as a teaching tool than a full Common Lisp
programming environment, although one could certainly write useful programs in
it (much as one can write useful programs in BASIC).  It is certainly the
finest microcomputer Lisp around.

------------------------------

Date: 28 Jun 84 8:24:45-PDT (Thu)
From: hplabs!hao!seismo!brl-tgr!brl-vgr!gwyn @ Ucb-Vax.arpa
Subject: Re: Best fitting curve - 3 points
Article-I.D.: brl-vgr.418

Usually the correct approach is to take the parameterized curve that
is expected by theory to pass through the data and do a weighted (by
inverse error squared) least squares fit (i.e. determine the values
of the parameters that minimize the weighted sum of the squares of
the deviations of the known data points from the curve).  One method
that works well is the Marquardt gradient-expansion technique described
in Bevington's "Data Reduction and Error Analysis for the Physical
Sciences".  Of course this assumes that you HAVE a theory...
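The weighted least-squares idea above reduces to closed form for the simplest case, a straight-line model y = a + b*x; a minimal sketch (the function name is illustrative, and an iterative scheme such as the Marquardt gradient expansion is needed only for models that are nonlinear in their parameters):

```python
# Weighted least-squares fit of a straight line y = a + b*x,
# weighting each data point by the inverse square of its error,
# as described in the message above.

def weighted_line_fit(xs, ys, sigmas):
    """Return (a, b) minimizing sum(w_i*(y_i - a - b*x_i)**2), w_i = 1/sigma_i**2."""
    ws = [1.0 / s**2 for s in sigmas]
    S   = sum(ws)
    Sx  = sum(w * x for w, x in zip(ws, xs))
    Sy  = sum(w * y for w, y in zip(ws, ys))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    delta = S * Sxx - Sx * Sx          # determinant of the normal equations
    a = (Sxx * Sy - Sx * Sxy) / delta  # intercept
    b = (S * Sxy - Sx * Sy) / delta    # slope
    return a, b

if __name__ == "__main__":
    # Noise-free points on y = 2x + 1; unequal sigmas do not change the answer here.
    a, b = weighted_line_fit([0.0, 1.0, 2.0, 3.0],
                             [1.0, 3.0, 5.0, 7.0],
                             [0.5, 1.0, 2.0, 1.0])
    print(round(a, 6), round(b, 6))
```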

------------------------------

Date: Tue, 3 Jul 84 22:44:12 EDT
From: Michael←D'Alessandro%Wayne-MTS%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: War Games

Although  this  is  a  late  response to Chuck McManis' request for
information on wargames, I thought I'd pass this along:

There are many microcomputer wargames available today. The majority
of  them  are  produced  by Strategic Simulations Inc. (SSI). SSI's
games are very similar to Avalon Hill's  games,  and  cover  topics
such as The Civil War, WWII (D-Day, North Africa, Sink the Bismarck,
Battle of Britain, etc.), and modern-day hypothetical combat.  All
these  games  are realistic (they use accurate orders of battle for
both sides) and quite playable. These games can be  played  by  two
people,  or you can play against the computer. Unfortunately, while
playing these games may help you get  a  "feel"  for  computer  war
games,  they  won't  help you write one since you can't look at the
programs - they are locked up. SSI's games are available for almost
all  microcomputers,  with  the  selection  for the Apple II family
having the most games. Go to a local computer store to see them.

One wargame that stands out from all others is "Eastern  Front"  by
Chris  Crawford  for  the  Atari 400/600/800. In addition to buying
Eastern Front, you can buy a documented 6502 source code listing of
the  program,  along with a booklet that Chris wrote describing how
he implemented the program, and the  various  combat  and  movement
routines he used. The program also has a little rudimentary "AI" in
it - when you play against the computer the computer is quite a
formidable opponent, and Chris describes his "AI" routines in detail.
Chris is one of the premier computer war game designers in the
country. To see this, you might check a local computer store, or a
local Atari User Group. Eastern Front was originally available via
the "Atari Program Exchange" run by Atari, but since Atari has just
been sold to Jack Tramiel, the Atari Program Exchange may no longer
exist.

   Michael←D'Alessandro%Wayne.MTS%Umich.MTS.Mailnet@MIT-Multics.ARPA

------------------------------

Date: 04 Jul 84  0027 PDT
From: Don Woods <DON@SU-AI.ARPA>
Subject: re: mongice

[Forwarded from the Stanford bboard by Laws@SRI-AI.  This is the tail
end of a discussion about the plural of mongoose (mongooses).]

[...]

I'm also reminded of Walt "Pogo" Kelly's observation that "the mongoose is a
singular beast because nobody can pronounce two of them."

------------------------------

Date: 02 Jul 84  1532 PDT
From: Frank Yellin <FY@SU-AI.ARPA>
Subject: From the New Yorker:  Man, Bytes, Dog  :-)

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

From the New Yorker
July 2, 1984


MAN, BYTES, DOG

Many people have asked me about the Cairn Terrier.  How about memory, they want
to know.  Is it IBM-compatible?  Why didn't I get the IBM itself, or a Kaypro,
Compaq, or Macintosh?  I think the best way to answer these questions is to
look at the Macintosh and the Cairn head on.  I almost did buy the Macintosh.
It has terrific graphics, good word-processing capabilities, and the mouse.
But in the end I decided on the Cairn, and I think I made the right decision.

Let's start out with the basics:

Macintosh:
    Weight (without printer): 20lbs.
    Memory (RAM): 128K
    Price (with printer): $3,090

Cairn Terrier:
    Weight (without printer): 14lbs.
    Memory (RAM): Some
    Price (without printer): $250

Just on the basis of price and weight, the choice is obvious.  Another plus is
that the Cairn Terrier comes in one unit.  No printer is necessary, or useful.
And--this was a big attraction to me--there is no user's manual.

Here are some of the other qualities I found put the Cairn out ahead of the
Macintosh:

PORTABILITY:  To give you a better idea of size, Toto in "The Wizard of Oz" was
a Cairn Terrier.  So you can see that if the young Judy Garland was able to
carry Toto around in that little picnic basket, you will have no trouble at all
moving your Cairn from place to place.  For short trips it will move under its
own power.  The Macintosh will not.

RELIABILITY:  In five to ten years, I am sure, the Macintosh will be superseded
by a new model, like the Delicious or the Granny Smith.  The Cairn Terrier, on
the other hand, has held its share of the market with only minor modifications
for hundreds of years.  In the short term, Cairns seldom require servicing,
apart from shots and the odd worming, and most function without interruption
during electric storms.

COMPATIBILITY:  Cairn Terriers get along with everyone.  And for communications
with any other dog, of any breed, within a radius of three miles, no additional
software is necessary.  All dogs share a common operating system.

SOFTWARE:  The Cairn will run three standard programs, SIT, COME, and NO, and
whatever else you create.  It is true that, being a microcanine, the Cairn is
limited here, but it does load the programs simultaneously.  No disk drives.
No tapes.

Admittedly, these are peripheral advantages.  The real comparison has to be on
the basis of capabilities.  What can the Macintosh and the Cairn do?  Let's
start on the Macintosh's turf--income-tax preparation, recipe storage,
graphics, and astrophysics problems:

    -------------------------------------------------------------
    |              Taxes    Recipes    Graphics    Astrophysics |
    | Macintosh     yes       yes         yes           yes     |
    | Cairn         no        no          no            no      |
    -------------------------------------------------------------

At first glance it looks bad for the Cairn.  But it's important to look beneath
the surface with this kind of chart.  If you yourself are leaning toward the
Macintosh, ask yourself these questions:  Do you want to do your own income
taxes?  Do you want to type all your recipes into a computer?  In your graph,
what would you put on the x axis?  The y axis?  Do you have any
astrophysics problems you want solved?

Then consider the Cairn's specialties:  playing fetch and tug-of-war, licking
your face, and chasing foxes out of rock cairns (eponymously).  Note that no
software is necessary.  All these functions are part of the operating system.

         ----------------------------------------------------
         |            Fetch    Tug-of-War    Face     Foxes |
         | Cairn        yes        yes        yes      yes  |
         | Macintosh    no          no         no       no  |
         ----------------------------------------------------

Another point to keep in mind is that computers, even the Macintosh, only do
what you tell them to do.  Cairns perform their functions all on their own.
Here are some of the additional capabilities that I discovered once I got
the Cairn home and house-broken:

WORD PROCESSING:  Remarkably, the Cairn seems to understand every word I say.
He has a nice way of pricking up his ears at words like "out" and "ball."  He
also has highly tuned voice-recognition.

EDUCATION:  The Cairn provides children with hands-on experience at an early
age, contributing to social interaction, crawling ability, and language skills.
At age one, my daughter could say "Sit," "Come," and "No."

CLEANING:  This function was a pleasant surprise.  But of course cleaning up
around the cave is one of the reasons dogs were developed in the first place.
Users with young (below age two) children will still find this function useful.
The Cairn Terrier cleans the floor, spoons, bib, and baby, and has the unerring
ability to distinguish strained peas from ears, nose, and fingers.

PSYCHOTHERAPY:  Here the Cairn really shines.  And remember, therapy is
something that computers have tried.  There is a program that makes the
computer ask you questions when you tell it your problems.  You say "I'm afraid
of foxes."  The computer says, "You're afraid of foxes?"

The Cairn won't give you that kind of echo.  Like Freudian analysts, Cairns are
mercifully silent; unlike Freudians, they are infinitely sympathetic.  I've
found that the Cairn will share, in a nonjudgmental fashion, disappointments,
joys, and frustrations.  And you don't have to know BASIC.

This last capability is related to the Cairn's strongest point, which was the
final deciding factor in my decision against the Macintosh--user-friendliness.
On this criterion, there is simply no comparison.  The Cairn Terrier is the
essence of user-friendliness.  It has fur, it doesn't flicker when you look at
it, and it wags its tail.

  -- James Gorman

------------------------------

Date: 2 Jul 84 20:04:53 EDT
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Ph.D. Oral - Analogy in Legal Reasoning

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                    A Ph.D. Oral Examination - Proposal Defense

          Title:    Analogy with Purpose in Legal Reasoning from Precedents
          Speaker:  Smadar Kedar-Cabelli
          Date:     Friday, July 6, 1984, 10:00 - 11:00 am
          Location: Hill Center, room 423

                    Open to DCS Faculty and Students

       One  open  problem in current artificial intelligence (AI) models of
    learning and reasoning by analogy is: which aspects  of  the  analogous
    situations  are  relevant to the analogy, and which are irrelevant?  It
    is currently recognized that analogy involves mapping  some  underlying
    causal  network  of relations between situations [Winston 82], [Gentner
    83], [Burstein 83], [Carbonell 83].  However, most  current  models  of
    analogy  provide  the  system  with  exactly  the  relevant  relations,
    tailor-made to each analogy to be performed.  As AI systems become more
    complex,  we  will  have  to  provide  them  with  the  capability   of
    automatically  focusing  on  the  relevant  aspects  of situations when
    reasoning analogically.  These will have to be sifted  from  the  large
    amount of information used to represent complex, real-world situations.

       In  order  to  study  these  general  issues,  we  are  examining  a
    particular case study of learning and  reasoning  by  analogy:  forming
    legal  concepts  by  legal  reasoning from precedents.  This is studied
    within the TAXMAN II project, which is  investigating  legal  reasoning
    using AI techniques [McCarty 82], [Nagel 83].

       In  this  talk, we will discuss the problem and a proposed solution.
    We examine legal  reasoning  from  precedents  within  the  context  of
    current  AI  models  of  analogy.    We then add a focusing capability.
    Current work on goal-directed learning [Mitchell 83a], [Mitchell  83b],
    and   explanation-based   learning [Dejong   83]   applies   here:  the
    explanation of how the precedent satisfies the intent of the law  (i.e.
    its  goals,  or purposes) helps to automatically focus the reasoning on
    what is relevant.

       Intuitively, suppose a lawyer wishes to argue that a particular case
    involving a bicycle violated  the  following  statute:  'a  vehicle  is
    forbidden  in a public park' [Hart 58].  He might argue by analogy to a
    clear precedent--a passenger car.  He needs to establish that a bicycle
    is a vehicle for the purposes of this statute, that bicycles should  be
    banned from the park for the same reasons that passenger cars are.  The
    purpose,  or  intent  of the law is to prohibit those things that would
    interfere with the serene, quiet setting of the park, or would  destroy
    the  natural  habitat,  and so on.  Reasoning from this, the lawyer can
    determine that aspects of the cases such as the ability to trample over
    lawns,  run  over  small  animals,  make  noise,  are relevant for this
    purpose.  On the other hand, aspects of the cases involving the country
    where the vehicles were manufactured, or the materials the vehicles are
    made of, are irrelevant for this purpose.  Given a  different  law  and
    purpose, these might well be relevant.

------------------------------

End of AIList Digest
********************

∂07-Jul-84  1252	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #86
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Jul 84  12:51:02 PDT
Date: Sat  7 Jul 1984 11:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #86
To: AIList@SRI-AI


AIList Digest            Saturday, 7 Jul 1984      Volume 2 : Issue 86

Today's Topics:
  Societies - New York SIGART,
  AI Tools - YAPS & LISPs,
  Mathematics - Curve Fitting,
  Brain Theory - Direct Stimulation & Hypnosis,
  Turing Test - Discussion,
  Games - Chess,
  Reviews - Robotics Industry Directory,
  Robotics - Poetry
----------------------------------------------------------------------

Date: 5 Jul 84 12:28:38-PDT (Thu)
From: ihnp4!whuxle!otto @ Ucb-Vax.arpa
Subject: SIGART mailing list
Article-I.D.: whuxle.511

This notice is intended for those interested in AI who live and work in the
greater New York City area.  My apologies to those who read this who are not
included in the above group.  The distribution mechanisms of netnews are not
precise enough to allow me to target my message better.



                            **************
                            **  NOTICE  **
                            **************

The New York City chapter of SIGART (the ACM Special Interest Group in
Artificial Intelligence) is interested in adding names to its mailing list.
This chapter holds monthly meetings in Manhattan to discuss various aspects
of AI, and often has invited speakers to present new ideas.

If you would like to receive the monthly meeting notices, please send the
following to me via electronic or paper mail:

        Your
                name
                US Mail mailing address
                company
                telephone number
                list of topics for the chapter to focus on

Please send this information to me at one of the following addresses:

        USENET: {ihnp4!}whuxld!otto
        CSNET:  otto.whuxle.btl
        MAIL:   George Otto, 1C-329A
                AT&T Bell Laboratories
                Whippany Road
                Whippany, NJ  07981

By working together to make this organization responsive to our interests,
it can become a valuable addition to our professional lives.

                                        George Otto
                                        AI Systems Dept
                                        AT&T Bell Labs, Whippany

------------------------------

Date: 5 Jul 1984 12:24:19-PDT
From: doshi%umn-cs.csnet@csnet-relay.arpa
Subject: Info. on >YAPS< production system.

Subject: Info. about the   YAPS   production system.
                           ----   -----------------

I am looking for information about YAPS. In particular:

        (1) Can anyone please send me some short
            (1-30 page) coded examples of actual usage,
            and documentation, if possible?

        (2) Does there exist a YAPS "primer" or any such
            thing that gives some good examples?

        (3) Any other information (names, CSNET addresses,
            etc.).

We have UNIX 4.1 & 4.2 .

If you have to send by surface mail, please send to :

        Rajkumar Doshi
        Computer Science Department
        University of Minnesota
        136 Lind Hall,
        207 Church Street, S.E.
        Minneapolis,  MN  55455

I will gladly repay the US postage promptly.
Thank you very much.

  -Raj Doshi

------------------------------

Date: 6 Jul 1984 08:25:26-EDT
From: bac@Mitre-Bedford
Subject: Small Computer Lisps?


   While everyone's on the topic of Lisps for various systems,
does anyone know of any decent Lisp implementations for CP/M
or MS-DOS (Z-80 or 8088 based systems)?  All I know of is a
version called MULisp, from Microsoft, but I have no idea
whether it's useful, efficient, etc.

   Are AI and Lisp going to remain tied to mainframe machines,
or will they ever reach the growing population of microcomputers?


                                        Brant Cheikes
                                        bac @ Mitre-Bedford

------------------------------

Date: Fri 6 Jul 84 11:52:00-MDT
From: purush <purushothaman@UTAH-20.ARPA>
Subject: Re: Interlisp on Unix? -- partial answer

Interlisp on a VAX running Unix can be obtained by sending mail to
Interlisp@isib.arpa or by writing to

USC Information Sciences Institute
Interlisp-VAX Project
4676 Admiralty Way
Marina del Rey, CA 90291.

A report of this effort is in the 1982 Lisp and Functional Programming
conference proceedings.

  -purush

------------------------------

Date: Fri 6 Jul 84 10:32:28-PDT
From: BARNARD@SRI-AI.ARPA
Subject: best-fitting curve for 3 points

Maybe I'm missing something.  Three points define a unique circle.
Finding the circle given the points is trivial.  What's the problem?

In general, the theory of splines deals with the problem of fitting
a piecewise polynomial to a sequence of points.  For example,
b-splines are piecewise cubics that can be used to connect points
with smooth, continuous curves (i.e., twice-differentiable curves).
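The circle construction really is straightforward; as a sketch (the function name is illustrative), the center can be found by solving the two perpendicular-bisector equations:

```python
# The unique circle through three non-collinear points, found by
# solving the two perpendicular-bisector equations for the center.

def circle_through(p1, p2, p3):
    """Return (cx, cy, r) for the circle through three points.

    Raises ValueError if the points are collinear (no unique circle).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Each bisector equation: 2*(xj-x1)*cx + 2*(yj-y1)*cy = xj^2+yj^2 - x1^2-y1^2
    a1, b1 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    c1 = x2**2 + y2**2 - x1**2 - y1**2
    a2, b2 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    c2 = x3**2 + y3**2 - x1**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("points are collinear")
    cx = (c1 * b2 - c2 * b1) / det
    cy = (a1 * c2 - a2 * c1) / det
    r = ((x1 - cx)**2 + (y1 - cy)**2) ** 0.5
    return cx, cy, r

if __name__ == "__main__":
    # Three points on the unit circle centered at the origin.
    print(circle_through((1, 0), (0, 1), (-1, 0)))
```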

------------------------------

Date: 2 Jul 84 9:11:00-PDT (Mon)
From: pur-ee!uiucdcs!ctvax!jmiller @ Ucb-Vax.arpa
Subject: Direct Brain Stimulation
Article-I.D.: ctvax.45200003

As noted by others, we're talking about experiments by Penfield here.
Pretty much any intro psychology book should be able to point you in the
right direction, but be careful about taking them too seriously.
Follow-up experiments by others did not always replicate Penfield's
findings, and these often failed in problematic ways -- people reported
hearing both sides of a telephone conversation, or doing things or being
places that could be disconfirmed in independent ways.  The effects that
could most reliably be replicated were those suggesting that
sensory pathways were being activated by the stimulation: reports of
pure tones or flashes of monochrome light were very common.  Penfield's
work was certainly interesting, but the current attitude is that there
was a little less there than first appeared.

Jim Miller
Computer Thought, Dallas

------------------------------

Date: 7 Jul 84 1227 EDT (Saturday)
From: Alex.Rudnicky@CMU-CS-A.ARPA
Subject: Human Memory

Hypnosis does not enhance memory for past events.  There is no proof
that it does.  There never was.  In all likelihood there never will be.
You may find documentation for this assertion in the work of several
investigators (in particular, Martin Orne).  For a review of the
literature try:

M. C. Smith, "Hypnotic memory enhancement of witnesses: Does it work?"
        Psychological Bulletin, 1983, 94(3), 387-407.

I quote from the abstract:

        "In contrast to the myriad of anecdotal reports extolling the
        virtues of hypnosis for this purpose [witness memory],
        controlled laboratory studies have consistently failed to
        demonstrate any hypnotic memory improvement."


Electrical stimulation of the brain was studied by Wilder Penfield, 30
to 40 years ago at the Montreal Neurological Institute.  Penfield did
experiments with stimulation in the course of operations for epilepsy.
His work is described in most textbooks.  You might enjoy reading
some of the protocols he collected in his book "The excitable cortex
in conscious man" (1958).  In summarizing his findings, Penfield uses
the words "illusions" and "hallucinations" to describe his patients'
recollections.

Now I have a question:
It may be fun to speculate about the super-normal and the para-normal,
but what does it have to do with AI?

------------------------------

Date: 1 Jul 84 20:26:59-PDT (Sun)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!philabs!cmcl2!seismo!rochester
      !rocksvax!sunybcs!gloria!colonel @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs. people
Article-I.D.: gloria.290

For those of you who missed the start of this colloquy, here's the text
of Turing's original hypothetical conversation:

        Q: Please write me a sonnet on the subject of the Forth Bridge.
        A: Count me out on this one.  I never could write poetry.
        Q: Add 34957 to 70764.
        A: (Pause about 30 seconds and then give as answer) 105621.
        Q: Do you play chess?
        A: Yes.
        Q: I have K at my K1, and no other pieces.  You have only K at K6
           and R at R1.  It is your move.  What do you play?
        A: (After a pause of 15 seconds) R-R8 mate.

>>      The point of the first answer is that no human is an expert on
>>      everything, and that a program which hopes to pass the Turing
>>      test had best not give itself away by being overly
>>      knowledgeable.

This strains my credulity.  Is it coincidence that the computer declines
to write a sonnet and accepts the other challenges?  A real human, trying
to prove that he is not a computer program, would probably welcome the
opportunity to offer a poem.

And did Turing believe that one can be an "expert" poet in the same way
that one can be an expert arithmetician or chess-player?  I hope not!

>>      Did you notice that the answer to the second question is
>>      incorrect?  It should be 105721.  [Aha! a sexist machine!  It
>>      assumes that women are no good with figures.  Oops--I forgot.
>>      Since you haven't read Turing's "Can a Machine Think?" you
>>      won't understand what women have to do with this discussion.
>>      Oh, well...]

This is unworthy of its author.  Of course I read the article.  My attack
was not against the details of the conversation (for that matter, the
third problem is ambiguous), but the premise of the Test.  You may
remember that Turing called it a "Game" rather than a "Test."  This
sort of situation arises _only_ as a game; if you really want to know
whether somebody is a person or a computer, you just look at him/it.

I should think that ELIZA has laid to rest the myth that a program's
"humanity" has anything to do with its intelligence.  ELIZA's
intelligence was low, but she was a very human source of comfort to
many people who talked with her.

Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: Wed 4 Jul 84 12:06:15-PDT
From: M.MCLURE%LOTS-B@SU-SCORE.ARPA
Subject: chess game

Below is reproduced a game the Fidelity Prestige chess machine
recently played against me. I have a provisional rating of 1550
based on 15 games. Not great, but not terrible.

Prestige makes a very interesting move at 17 ... Ng3. I prefer
this game to the Blitz vs. Belle game of a few years ago where
Belle makes a 10-ply mating sacrifice giving up a rook.

Here, Prestige makes a 10-ply king position disruption sacrifice
giving up a knight. If White does not return the Knight, all sorts
of mating threats ensue at about the 10-12 ply level.

This is easily the most impressive micro chess game I've seen.

White - Cracraft/1550, Black - Prestige/1875

1. e4 c5 2. Nf3 Nc6 3. d4 cd4 4. Nd4 g6 5. c4 Bg7 6. Nc6 bc6
7. Qc2 Nf6 8. Bd3 d5 9. ed5 cd5 10. cd5 Qd5 11. o-o Bb7 12. f3 Rd8
13. Rd1 Qd4 14. Kh1 o-o 15. Nc3 Qc5 16. Bf4 Nh5 17. Bd2 Ng3 18. hg3 Qh5
19. Kg1 Bd4 20. Be3 Be3 21. Kf1 Bd4 22. Ke1 Qe5 23. Kf1 Qe3 24. Rde1 Kg1
25. Ke2 Qg2 26. Kd1 Qf3 27. Kc1 Bc3 White resigns.

The time control was 40 moves in 2 hours.

        Stuart

[For a record of the first game in which a micro defeated a USCF-rated
master in a tournament game see David Welsh's letter in IEEE Spectrum,
July 1984, p. 8.  Jerry C. Simon (rated 2245) was mated (that's chess
talk) in 55 moves by Novag's Constellation chess micro, which uses the
same 6502 8-bit processor as the Prestige machine.  An earlier Spectrum
report that David Moody held the dubious honor of the first defeat was
incorrect.  -- KIL]

------------------------------

Date: Wed 4 Jul 84 14:59:37-PDT
From: M.MCLURE%LOTS-B@SU-SCORE.ARPA
Subject: Bit-Map Chess Article

        I have an article that will soon be published in the
ICCA Journal (International Computer Chess Journal) and I would
like to offer it to AILIST for its readers.

        The title is "Bit-map move generation in chess."
and it is 15262 bytes on TOPS-20.  The article is in
[SU-SCORE]<G.MCLURE>BITMAP.TXT.

I've included a note at the top of the file that I would like kept in the
distributed version.

        Stuart

------------------------------

Date: Tue 3 Jul 84 09:23:41-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: 1984 International Robotics Industry Directory

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The Math/CS Library has received the 1984 edition of the International
Robotics Industry Directory.  This directory has expanded a great deal since
the first edition in 1981.  The main part of the directory is an alphabetical
listing by company with a page of specifications for each product.  There is
also a listing of research institutes which includes number of staff, director
addresses, and areas of research.  A listing of consultants/systems houses
is included.  The directory is located in our reference section.
Table of Contents: Applications supported matrix
                   Sensors supported matrix
                   Performance Characteristics matrix
                   Price range matrix
                   Industrial robots
                   special application systems
                   automatic guided vehicles
                   actuators
                   controllers/electronics
                   distributors
                   end effectors
                   hydraulic/pneumatic
                   mechanical components and peripherals
                   consultants/systems houses
                   research institutes
                   glossary
                   index

Harry Llull

------------------------------

Date: 5 Jul 1984 10:43:28-EDT
From: kushnier@NADC
Subject: The Magazines


The Magazines
    By Ron Kushnier

It seems for every topic
For every job or scheme
For each and every interest
There exists a magazine.

So I predict with robots
At least a choice of four
All touting ads and projects
Arriving at your door.

------------------------------

Date: 6 Jul 1984 08:31:44-EDT
From: kushnier@NADC
Subject: A Story


A Story
  By Ron Kushnier

Into our home a robot comes
Its shape seems deja vu
It's something to do with Hans and Luke
With Leia and R2.
But its purpose is not one of fright,
Nor Universal Glory
It is here to serve and be our friend
Which is quite a different story.

------------------------------

End of AIList Digest
********************

∂10-Jul-84  2221	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #87
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jul 84  22:21:24 PDT
Date: Tue 10 Jul 1984 21:16-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #87
To: AIList@SRI-AI


AIList Digest           Wednesday, 11 Jul 1984     Volume 2 : Issue 87

Today's Topics:
  Applications - Quantum Logic,
  AI Tools - OPS5 Under Unix & LISP Implementations,
  Brain Theory - Electrical Stimulation,
  Linguistics & Philosophy - Use of "if" in Natural Language,
  Expert Systems - Diagnostic Systems References,
  AI Culture - Cultural Premises,
  Linguistics - New CSLI Reports,
  Commonsense Reasoning - Discussion,
  Turing Test - Discussion
----------------------------------------------------------------------

Date: Mon, 9 Jul 84 09:25 EDT
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Implementations of quantum logic?

Does anyone out there have an interest in quantum logic?
Has any sort of implementation of a "theorem prover" ever
been attempted?  I would be interested in any experience, thoughts
etc. on this subject.

"Steve" Stevenson
dsteven@clemson

------------------------------

Date: 29 Jun 84 6:18:55-PDT (Fri)
From: hplabs!kaist!kiet!aitool @ Ucb-Vax.arpa
Subject: OPS5 Under Unix
Article-I.D.: kiet.196

Where and how can I get OPS5 on Unix (4.1bsd or others)?
Please inform me if you know. I also want information about other
knowledge engineering tools on Unix.

address:        ..!hplabs!kaist!kiet!dhshin
                Dongha Shin
                Computer Research Div.
                K.I.E.T
                P.O. Box 31, Kumi
                Republic of Korea

------------------------------

Date: Mon, 9 Jul 84 14:37:10 cdt
From: archebio!bantz%uiuc.csnet@csnet-relay.arpa
Subject: LISP on microcomputers


cf. BYTE, July '84, pp. 281ff: review of MuLisp and IQLISP

cf. full page ad in latest AI Magazine for Golden Common Lisp

------------------------------

Date: 3 Jul 1984 10:43:37-EDT (Tuesday)
From: Mark N. Wegman <WEGMAN.YKTVMX%ibm-sj.csnet@csnet-relay.arpa>
Subject: LISP release at IBM

         [Forwarded to the Rutgers bboard by Ryder@Rutgers.]
         [Forwarded from the Rutgers bboard by Tyson@SRI-AI.]


Could you pass this on to whoever might be interested?

    IBMIBMIB            IBMIBMIB      MIBMIBMIBMI      IBMIBMIBMIBMI
    IBMIBMIB            IBMIBMIB    IBMIBMIBMIBMIBM    IBMIBMIBMIBMIBM
      MIBM                MIBM      IBMI       MIBM      MIBM    BMIBM
      MIBM                MIBM        MIBMI              MIBM    BMIB
      MIBM                MIBM          BMIBMIB          MIBMIBMIBM
      MIBM      IBMI      MIBM              MIBMI        MIBM
      MIBM      IBMI      MIBM      IBMI       MIBM      MIBM
    IBMIBMIBMIBMIBMI    IBMIBMIB    IBMIBMIBMIBMIBM    IBMIBMIB
    IBMIBMIBMIBMIBMI    IBMIBMIB      MIBMIBMIBMI      IBMIBMIB


Today IBM announces the availability of LISP/VM.
With thanks to the many people who helped make this possible.
                     --- Cyril Alberga, (914) 945-1776 (ALBERGA.YKTVMX@IBM)
                         Martin Mikelsons, (914) 945-1343 (MIKELSN.YKTVMX@IBM)
                         Mary Van Deusen, (914) 945-2394 (MAIDA.YKTVMX@IBM)
                         Mark Wegman, (914) 945-1327 (WEGMAN.YKTVMX@IBM)


[At last, LISP is legitimate!  -- KIL]

------------------------------

Date: 9 Jul 84 08:23 PDT (Monday)
From: DMRussell.PA@XEROX.ARPA
Subject: Electrical Stimulation Studies

From pur-ee!uiucdcs!convex!graham @ Ucb-Vax.arpa

"... I have heard (but have no reference document to cite) that neuro-surgeons
have demonstrated that stimulation (i.e, contact with) certain parts of the
brain can produce complete recall of all sensory input from a past event,
even of details not originally "noticed". ...."

Electrical stimulation studies were originated by Wilder Penfield at
the Univ. of Montreal.  During neurosurgery, he would drop a few
electrodes onto the brain surface (with the ground at the base of the
spine) and
stimulate away.  His subjects reported all sorts of sensory phenomena
including taste, sounds, lights, stars, and (apparently) memory
awareness.  One subject reported hearing a particular song being
performed by a specific orchestra.  As these things go, people later
decided that he was using far too much current, and apparently had no
control over exactly how much brain he was stimulating.  Thus, his
conclusions about locality wrt electrical stimulation are suspect.
Similar arguments are made about how to interpret the sensory
phenomena he elicited.  [As for references, almost any beginning
neuropsych textbook will have them, but here's one: W. Penfield and P.
Perot "The Brain's Record of Auditory and Visual Experience", Brain,
(1963), v. 86, pp 595-696]

Nowadays, electrical stimulation mapping has vastly improved.  Small
amounts of current are used, and people have some idea about what
exactly is being hit.  There is some good stuff done by Whitaker and
Ojemann on what happens to a subject's ability to use and
understand language during electrical stimulation.  [somewhere in the
journal Brain & Language, within the past 4 years]

Something to keep in mind, however, is that almost all brain
stimulation is done on people who have their skulls open for a very
serious reason.  (Generally, the patients are being treated for severe
epilepsy.)  Drawing conclusions about the functioning of normal
brains on the basis of a few tests performed on severely epileptic
brains leaves me wondering.

  -- Daniel Russell --

------------------------------

Date: 6 Jul 84 09:50:33 PDT (Friday)
From: McNelly.ES@XEROX.ARPA
Subject: Re: Use of if in natural language

I thought the following message raised some points that might be of
interest to the general readership...

  -- John

  Date:  6 Jul 84 09:31:00 PDT (Friday)
  Subject: Re: Use of if in natural language
  In-reply-to: McNelly's message of 6 Jul 84 07:58:19 PDT (Friday)

John,

        Thank you for forwarding that wonderful message.  I'd have to agree
with most of it on a preliminary basis but some points deserve a bit
more thorough examination.  For instance, I must take issue with the
statement: "In the philosophical literature questions about the validity
of contraposition ... are generally asked under the headings
'Subjunctive Conditionals', 'Counterfactuals' or 'Contrary-to-fact
Conditionals'".  I think the author has inadvertently or otherwise
forgotten the most important heading "Inferential Counter-positive
Conditionals".  This category, as we know, contains the first two
sub-headings mentioned along with an abstract grouping of
uncategorizable subclasses of separate but still readily identifiable
conditionals.  By this time, we are all well aware that the infinite
group of non-identifiable conditionals is gathered together under the
single heading "Unrecognizable Categorical Implications" and that this
category is itself what has come to be known in recent literature as a
"non-group" (i.e., one that is not generally associated with the other
"real" groups when discussing conditionals).  However to retrace that
tangent to the main thread of circular focus, one must begin to grasp
the full implications of the inferential counter-positives.  As an
example, when one says, "I have seen the daybreak and it is dawn," the
preferred "quasi-logical" response on the counter-positive hierarchical
level would not directly ask of night.  A question of night would fall
under a "contrary-to-fact" heading, most likely the "positive
post-reality" subclass, although this stands to be debated.  In any
case, I would like to hear if anyone else felt the same strong
motivation to disbelieve the original message and if so, what points in
particular did you find to be the greatest source of misinformation.

  -- Fungi

------------------------------

Date: 7 Jul 84 03:03:50 EDT  (Sat)
From: Dana S. Nau <dsn@umcp-cs.arpa>
Subject: Re:  Use of if in natural language

Your description sounds like it may relate to the difference between
deductive and abductive inference.  Jim Reggia and I have been doing
some research on this at the University of Maryland; the following is
a partial list of references (given in "refer" format).

%A J. A. Reggia
%A B. Perricone
%A D. S. Nau
%A Y. Peng
%T Answer Justification in Abductive Expert Systems for
Diagnostic Problem Solving
%D 1984
%R submitted for publication

%A D. S. Nau
%A J. A. Reggia
%T Relationships between Abductive and
Deductive Inference in Knowledge-Based Diagnostic Problem Solving
%R submitted for publication
%D 1984

%A J. A. Reggia
%A D. S. Nau
%A P. Y. Wang
%T Diagnostic Expert Systems Based on a Set Covering Model
%D Nov. 1983
%P 437-460
%J International Journal of Man-Machine Studies

%A J. A. Reggia
%A D. S. Nau
%A P. Y. Wang
%T A Theory of Abductive Inference in Diagnostic Expert Systems
%D Dec. 1983
%R Tech. Report TR-1338, Computer Sci. Dept., Univ. of Maryland
%C College Park, MD

%A J. A. Reggia
%A P. Y. Wang
%A D. S. Nau
%T Minimal Set Covers as a Model for Diagnostic Problem Solving
%J Proc. First IEEE Computer Society Internat.
Conf. on Medical Computer Sci./Computational Medicine
%D Sept. 1982

%A D. S. Nau
%A J. A. Reggia
%A P. Y. Wang
%T Knowledge-Based Problem Solving Without Production Rules
%J Proc. IEEE 1983 Trends and Applications Conference
%C Gaithersburg, MD
%D May 1983
%P 105-108

%A J. A. Reggia
%A D. S. Nau
%A P. Y. Wang
%T A New Inference Method for Frame-Based Expert Systems
%J Proc. Annual National Conference on Artificial Intelligence
%C Washington, DC
%P 333-337
%D Aug. 1983
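
The set-covering model described in these papers can be sketched in a few
lines: an abductive diagnosis is a minimum-cardinality set of disorders
whose manifestations jointly cover everything observed, and there may be
several rival covers.  The knowledge base below is invented for
illustration, not taken from the papers.

```python
from itertools import combinations

def minimal_covers(causes, observed):
    """All minimum-cardinality sets of disorders whose combined
    manifestations cover every observed manifestation."""
    names = list(causes)
    for size in range(1, len(names) + 1):
        covers = [set(combo)
                  for combo in combinations(names, size)
                  if set().union(*(causes[d] for d in combo)) >= observed]
        if covers:
            return covers   # competing minimal explanations
    return []

# Hypothetical knowledge base: disorder -> manifestations it can cause.
causes = {
    "flu":       {"fever", "cough", "aches"},
    "cold":      {"cough", "sneezing"},
    "pneumonia": {"fever", "cough", "chest-pain"},
}
observed = {"fever", "cough"}
print(minimal_covers(causes, observed))  # two rival single-disorder covers
```

Here both "flu" and "pneumonia" alone cover the findings, so the abductive
step leaves two competing hypotheses rather than one deduced answer.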

------------------------------

Date: 5 Jul 84 8:17:05-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!duke!mgv @ Ucb-Vax.arpa
Subject: A Report on the Cultural Premises of the AI Community
Article-I.D.: duke.4495


     I would like to point out the existence of a "pilot  survey"
on  the  "cultural premises of [the] artificial intelligence com-
munity." The survey was carried out during IJCAI-8  in  Karlsruhe
by  Massimo Negrotti, a sociologist with the University of Genoa,
Italy.  The  research  was  sponsored  by  the  Italian  National
Research Council (CNR), and I think that you can obtain a copy of
the report by writing to Massimo Negrotti, Chair of Sociology  of
Knowledge, University of Genoa, Genova, Italy.

     Within its limitations (e.g., small sample size), the survey
shows  that AI researchers from different geographical areas have
different views of the world. For example, "human  understanding"
is most often defined as "reduction to familiar terms" by British
researchers, but as "general representation  of  facts"  by  Con-
tinental Europeans.

     It may be interesting to note that almost 60% of the  inter-
viewed  researchers  answered  "yes"  to  the following question:
"From your point of view, is it  plausible  a  pure  A.I.  theory
[sic]  without  references  to the philosophical tradition?", but
that this percentage was as high as 67.8 for Continental  Europe-
ans, and as low as 37.8 for USA researchers.

                                        Marco Valtorta
                                        (duke!mgv)

------------------------------

Date: Sat 7 Jul 84 09:36:02-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: New CSLI Reports

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                        N E W   C S L I   R E P O R T S

Limited editions of three new Reports (Nos. CSLI-8-84, CSLI-9-84, and
CSLI-10-84) have just been published. Copies may be obtained by writing
to Dikran Karagueuzian at the Center. The reports are:

Reflection and Semantics in LISP by Brian Cantwell Smith.
        Report No. CSLI--84--8, July, 1984.

The Implementation of Procedurally Reflective Languages by
        Jim des Rivieres and Brian Cantwell Smith.
        Report No. CSLI--84--9, July, 1984.

Morphological Constraints on Scandinavian Tone Accent by
        Meg Withgott and Per-Kristian Halvorsen.
        Report No. CSLI--84--11, July, 1984.

------------------------------

Date: 30 Jun 84 0:13:42-PDT (Sat)
From: ucbcad!tektronix!orca!shark!brianp @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: shark.861

the computer could do the temperature conversion without blinking
an LED, if it knows that this here's a mapping (not a monkey and
banana to simulate), and it's one of those easy linear jobs, and
if it knows how to read and can figure out the question.
(no fair writing a temperature conversion (or any give-it-some-numbers
interpolation/extrapolation) program.  you have to write a run-of-the-mill
common sense reasoning program, and send it through elementary school.
or hire a tutor.  little kids can tease new types of people real bad.
wouldn't want our program to have emotional problems, would we? :-) )

                        Brian Peterson
                        ...!ucbvax!tektronix!shark!brianp

------------------------------

Date: 5 Jul 84 9:37:29-PDT (Thu)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax.arpa
Subject: Re: The Turing Test (reply to Col. Sicherman)
Article-I.D.: pucc-i.338

>  Is it coincidence that the computer declines
>  to write a sonnet and accepts the other challenges?  A real human, trying
>  to prove that he is not a computer program, would probably welcome the
>  opportunity to offer a poem.

Yes, I believe it is a coincidence.  Another conversation from the Turing
article demonstrates that he did not mean to exclude the possibility of
a sonnet-writing machine:

        Interrogator:  In the first line of your sonnet which reads
        'Shall I compare thee to a summer's day,' would not 'a spring
        day' do as well or better?

        Witness:  It wouldn't scan.

        Interrogator:  How about 'a winter's day.' That would scan all
        right.

        Witness:  Yes, but nobody wants to be compared to a winter's
        day.

        Interrogator:  Yet Christmas is a winter's day, and I do not
        think Mr. Pickwick would mind the comparison.

        Witness:  I don't think you're serious.  By a winter's day one
        means a typical winter's day, rather than a special one like
        Christmas.

And so on [Turing continues].  What would Professor Jefferson say if
the sonnet-writing machine was able to answer like this in the viva voce?

   ---------------------------------------------------------------

>  My attack was not against the details of the conversation (for that
>  matter, the third problem is ambiguous), but the premise of the Test.

Yes, the third problem was ambiguous.  I thought it was also rather
clever:

        Q:  I have K at my K1, and no other pieces.  You have only K
            at K6 and R at R1.  It is your move.  What do you play?
        A:  (After a pause of 15 seconds) R-R8 mate.

A machine might be expected to ask whether the rook is at QR1 or KR1,
not realizing that it is irrelevant.  The answer "R-R8 mate" is
correct in either case.  Was this a trap laid by the questioner?
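
Turing gave the position in descriptive notation; reading it from Black's
side (interrogator's king on e1, witness's king on e3, rook arriving on
White's first rank, so QR1 = a1 and KR1 = h1), a throwaway script can
confirm that "R-R8" mates from either rook square.  The board model below
is my own minimal sketch for this one position, not a general legality
checker.

```python
# Board squares as (file, rank), 0-indexed: e1 = (4, 0), e3 = (4, 2).
WK, BK = (4, 0), (4, 2)   # white king, black king

def attacked(sq, rook):
    """True if sq is attacked by the black king or the black rook.
    Only the black king can block a rook line here, since we test
    squares as if the white king had already vacated its own."""
    if max(abs(sq[0] - BK[0]), abs(sq[1] - BK[1])) == 1:
        return True
    if sq != rook and (sq[0] == rook[0] or sq[1] == rook[1]):
        if sq[1] == rook[1]:
            lo, hi = sorted((sq[0], rook[0]))
            blocked = BK[1] == sq[1] and lo < BK[0] < hi
        else:
            lo, hi = sorted((sq[1], rook[1]))
            blocked = BK[0] == sq[0] and lo < BK[1] < hi
        return not blocked
    return False

def is_mate(rook):
    """White K e1, Black K e3, black rook just arrived on `rook`:
    mate iff White is in check and has no legal king move."""
    if not attacked(WK, rook):
        return False
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) == (0, 0):
                continue
            f = (WK[0] + dx, WK[1] + dy)
            if not (0 <= f[0] < 8 and 0 <= f[1] < 8):
                continue
            if f == rook:  # capturing the rook
                if max(abs(f[0] - BK[0]), abs(f[1] - BK[1])) > 1:
                    return False       # rook undefended: legal escape
                continue
            if not attacked(f, rook):
                return False           # a flight square exists
    return True

# "R at R1" read from Black's side: QR1 = a1 = (0,0), KR1 = h1 = (7,0).
print(is_mate((0, 0)), is_mate((7, 0)))  # True True -- mate either way
```

So the irrelevance of QR1 versus KR1 checks out: the kings stand in
opposition, and the rook mates along the first rank from either corner.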

You say you object to the premise of the test.  The reason for that
becomes apparent in your next comment:

>  You may remember that Turing called it a "Game" rather than a "Test."  This
>  sort of situation arises ←only← as a game; if you really want to know
>  whether somebody is a person or a computer, you just look at him/it.

Where does Turing say or imply that being able to tell a person from a
computer is of any importance?  The question is merely, "Can a machine
think?"  Unless you believe that "having a human form" is a prerequisite for
thinking, physical appearance means nothing.  Is your objection of the form,

  1.  The Turing "imitation game" is not an adequate test of a machine's
      ability to think?  [If not, why not?]

  2.  It is of no importance to decide whether machines can think, and
      therefore the Turing "imitation game" has no value?  [If this is
      your position, then I think we have nothing more to discuss.]

>  I should think that ELIZA has laid to rest the myth that a program's
>  "humanity" has anything to do with its intelligence.  ELIZA's intel-
>  ligence was low, but she was a very human source of comfort to many
>  people who talked with her.

I don't think the imitation game is (or was intended to be) a test of
"humanity."  Since ELIZA cannot come close to performing well in the
imitation game, she has no relevance to the validity of the test.
Yes, I am aware that ELIZA has fooled people, but this happened under
circumstances that are very different from the imitation game.


Dave Seaman                     "My hovercraft is full of eels."
..!pur-ee!pucc-i:ags

------------------------------

End of AIList Digest
********************

∂11-Jul-84  1558	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #88
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Jul 84  15:57:32 PDT
Date: Wed 11 Jul 1984 14:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #88
To: AIList@SRI-AI


AIList Digest           Thursday, 12 Jul 1984      Volume 2 : Issue 88

Today's Topics:
  AI News - JEALOUS COMPUTER KILLS TOP SCIENTIST,
  Expert Systems & Humor - Thinking for Non-Thinkers,
  Seminars - Rensselaerville Forum,
  Brain Theory - Simulation,
  Mind in a Techno-Evolutionary Perspective,
  Mathematics - Curve Fitting,
  Expert Systems & Robotics - Beef Wellington & Shoe-Tying Challenge,
  Poetry - The Japanese
----------------------------------------------------------------------

Date: 5 Jul 84 16:02:48-PDT (Thu)
From: decvax!dartvax!alexc @ Ucb-Vax.arpa
Subject: JEALOUS COMPUTER KILLS TOP SCIENTIST
Article-I.D.: dartvax.2070

AI in the news:

The 10 July 1984 issue of Weekly World News has the cover headline

     -----------------------------------------------------------
        `It was cold-blooded murder' says grieving wife

                J E A L O U S      C O M P U T E R

                K I L L S   T O P   S C I E N T I S T

Old machine electrocutes owner -- after he buys a more advanced model
     -----------------------------------------------------------

(I'm unable to reproduce their usual 100pt type).

The essence of the article is that one Chin Soo Ying, a Chinese inventor,
had been building the computer since 1950.  After he decided to build
a more modern machine, he was electrocuted at the controls when the old
computer burned out.
The News quotes his wife, Tzu Lin.


   "Chin was murdered in cold blood by the computer he had created.

   "He had given life to his creation.  He thought of it as a woman,
had even given it a woman's name.

   "He spoke to the thing adoringly as Tsen Tsen.

   "Through his genius he had programmed it to respond to his words
of love, to excite him beyond the limits of what a mortal woman could
hope to achieve.

   "Somehow the thing took on a mind of its own.  The computer fell
in love with my husband.

   "For 34 years they were closer than lovers.

   "I tried to fight for my husband.  But how could I compete with a
machine?  There was no room in his life for me.  Finally, I left him.

   "Tsen Tsen was a huge computer.  It covered three walls of one room
in our home.  I used to listen in amazement as he talked to it like a
schoolboy in love.  And the computer would respond like a worshipful
woman.

   "But Chin one day decided to build a newer, more modern computer.

   "The trade agreements with America made the technical information
and the components available to him for the project.

   "He began working on the new computer day and night.

   "And he even gave it another woman's name -- Woo Shi.

   "There is an old Chinese proverb that says vengeance is nourished
in the heart of a spurned woman.

   "I am thoroughly convinced that Chin's rejection sparked the fires
of hate within the old computer.

   "She could not bear to lose her creator to another.  He had been
hers for so many years and if she could not have him, no one else
would.

   "Somehow Tsen Tsen programmed herself to electrocute Chin.

   "And with the death of that incredible man, she no longer had a
reason to live.

   "She overloaded her circuits and destroyed herself.

   "That computer committed murder and suicide".

   A longtime friend and computer programmer expressed the opinion that the
death was an accident, but noted

   "A computer programmed as his was could be capable of jealousy.
There's no doubt in my mind his computer had unusual qualities.
   "That may be difficult to believe, but we are learning astonishing
things about computers every day.
   "With such machines, anything is possible, jealousy and even murder".

     -----------------------------------------------------------

 So much for a very promising AI effort.

------------------------------

Date: Sat 7 Jul 84 09:57:55-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Thinking for Non-Thinkers

This letter from Jim Horning was published in
the Open Channel column of IEEE Computer, July 1984, p. 90.

                                        -- Ken Laws


               The Future of Thinking for Non-Thinkers

  There are a large number of people who are not prepared to think
(since thinking is often complex and unintegrated) but nonetheless
need the results of thinking.  We can attack their problem in a
variety of ways:

  * providing multiple-choice questionnaires;
  * observing the I/O behavior of real thinkers;
  * developing natural language conventions that avoid the need
    for thinking (cliches, etc.);
  * publishing collections of real thoughts that can be combined
    to suit the special needs of any occasion (Bartlett's, etc.);
  * equipping a system with useful thoughts that determine whether
    any of them are relevant to the current user (the prototype
    will think about blocks);
  * developing a fuzzy system that postpones the need for thinking
    indefinitely;
  * implementing a specialized system that contains only the thoughts
    needed by a particular class of users and allows them to
    personalize thoughts by discarding those they don't need; and
  * setting up a system that selects the most efficient thought
    for any occasion.

  For a further breakthrough in the area, however, we must develop a
simple semantic model of thinking that can be directly implemented
on existing hardware.  It must incorporate the behavior currently
exhibited by non-thinkers in the application areas and interact
gracefully with non-thinkers.  We must not take thinkers as our model!
The thoughts produced must not be too sophisticated for naive users.
In each application, thoughts must be introduced gradually to
minimize disruption and to allow for imprecise thinking.  The system
should evolve to the point where it handles all the routine thinking.
We must cater to maximum independence in thinking--separate thoughts
should not affect each other.

  To plan our next step, we should look back to the last major
breakthrough in thinking: Euclidean geometry.  Euclid believed that
the world was flat; this belief permitted significant simplification
in his thinking about the geometry of the world.  Unfortunately,
many more recent "thinkers" have ignored this lesson and used more
complicated, spherical world models ...

                                        Jim Horning
                                        DEC Systems Research
                                        130 Lytton Ave.
                                        Palo Alto, CA  94301

------------------------------

Date: Sun, 8 Jul 1984  15:13 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Rensselaerville Forum

About that forum  in Rensselaerville with Asimov --
I'll only be there on Aug 4th and 5th, because of AAAI meeting
in Austin.  But I consider Asimov to be an absolutely first-class
thinker about the future of AI and worth the price of admission.

Speaking of that, when I accepted the invitation (at no fee) I was
unaware that there was a price of admission to that symposium.  I'm
sure it just covers expenses for that non-profit foundation, but I
might have thought twice if I'd known.  In any case I've gotten good
ideas from Asimov every time we've met; the simplicity of his language
may obscure his depth.

------------------------------

Date: 29 Jun 84 7:40:52-PDT (Fri)
From: hplabs!sdcrdcf!sdcsvax!akgua!psuvax1!simon @ Ucb-Vax.arpa
Subject: Re: Human Models
Article-I.D.: psuvax1.1093

Incorrect argument: "You cannot model the brain at a quantum-mechanical level,
you must use a higher order (deterministic, non-molecular) one".
Why?
You cannot make a simulator that is an exact replica, and expect it to
be faster.  But there's no reason why there couldn't be a quark computer,
working at incredible speeds (and probably getting the answers).  In
fact the reverse question is more interesting: how fast can you
simulate the real world?
js

------------------------------

Date: Wed, 11 Jul 84  1:05:42 EDT
From: "Paul Levinson" <1303@NJIT-EIES>
Subject: Mind in a Techno-Evolutionary Perspective

In response to Rich Rosen's attempt to reason away the existence of mind
(pyuxn.784):

(a) Organisms do much more than merely "respond" to environments.  Even
the tiniest viruses actively reshape their environments by incessantly
moving bits of matter from place to place.

(b) In humans, this active reshaping becomes a predominating, deliberate
reshaping, as the shaping becomes fired by our imagination and rationality.
Through the technological result, we have reshaped our Planet and are now
on the verge of beginning to reshape the universe.

(c) The organ that makes this technological reconfiguration possible is
of course the brain.  It is indeed composed of material, but of material
so special in its organization that it can do things -- reshape the
world, think  -- that no other natural (and, at present, artificial)
material can do.  It is in recognition of the evolutionary uniqueness
of this thinking material that many people refer to it as mind.  We need
not discard a word -- and the important concept it emphasizes -- merely
because it has been abused by Greek philosophers and others.

For more on this, see my "Technology as the Cutting Edge of Cosmic
Evolution," paper presented at 150th Annual Meeting of the AAAS
in N.Y.C., May 27, 1984.

See also Paul M. Churchland's "Matter and Consciousness" (M.I.T. Press,
1984).

------------------------------

Date: Wed 11 Jul 84 00:35:29-PDT
From: Mike Peeler <MDP@SU-SCORE.ARPA>
Subject: Curve Fitting

To BARNARD@SRI-AI:

    The problem is that three points do not ONLY define a
circle.  It takes five points to determine a unique conic
section.  Besides, the curve is most likely meant to be a function
of x.
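
The circle claim is easy to make concrete: three non-collinear points do
determine a unique circle, whose center falls out of the two
perpendicular-bisector equations.  A minimal sketch (sample points are
invented for illustration):

```python
def circle_through(p1, p2, p3):
    """Center and radius of the unique circle through three
    non-collinear points.  Each bisector equation has the form
    2(x2-x1)cx + 2(y2-y1)cy = (x2^2+y2^2) - (x1^2+y1^2)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 + y2**2 - x1**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 + y3**2 - x1**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("points are collinear")
    cx = (c1 * b2 - c2 * b1) / det   # Cramer's rule on the 2x2 system
    cy = (a1 * c2 - a2 * c1) / det
    r = ((x1 - cx)**2 + (y1 - cy)**2) ** 0.5
    return (cx, cy), r

# Three points of the unit circle recover center (0,0), radius 1:
center, r = circle_through((1, 0), (0, 1), (-1, 0))
print(center, r)  # -> (0.0, 0.0) 1.0
```

A general conic, by contrast, has five free coefficients, which is why
five points are needed to pin one down.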

                                        Cheers,
                                        Mike

------------------------------

Date: 5 Jul 84 21:08:50-PDT (Thu)
From: hplabs!hao!ames-lm!jaw @ Ucb-Vax.arpa
Subject: Beef Wellington too tough for Robots (+ shoe-tying algorithm
         challenge)
Article-I.D.: ames-lm.388

#  "God sends meat, and the Devil sends cooks." -- John Taylor, Works [1630]

     Here is a quote from Computer Currents, a local trade newspaper,
under the byline of Wendy Woods (no relation):

     "Meanwhile, a Stanford University scientist is attempting to program
a robot to cook Beef Wellington.  Professor Brian Reid has racked up
60 pages of instructions just to tell the robot how to find and slice beef.
He gave up when he became bogged down.  'It was when I had to tell the robot
how to wrap the beef in pastry ... I decided to go to bed.'  He's also
discovered that 'a lot of cooking is reading BETWEEN the lines.'"

     [Note:  Reid authored SCRIBE, is a wine connoisseur, likes to bust UNIX
system crackers (see recent issue of California), and submits stuff to
fa.laser-lovers.]  Now, cooking has always been more of a tactile and visual
feedback process than an intellectual endeavor.  Given the general
agreement that the cerebral (chess, medical diagnosis, etc.) is easy for AI
but the physical (juggling, driving a car) is not, Mr. Reid's attempt
to build a rule base for such a task seems a bit premature.  On the other
hand, sushi-making robots in Japan are old hat.

        -----------------(net.cooks may stop here)------------

     This reminds me of a lecture given years ago by a linguistics prof
at U. C. Berkeley (J. Matisoff, I believe), who, to impress students about
the underlying knowledge base for language, dared his audience to
give a verbal ALGORITHM FOR TYING SHOES.  Folks would throw instructions
at him; he'd follow them blindly, interpreting fuzziness and ambiguity
freely, and as a consequence, could not successfully tie a shoe.  I've
always regarded this as a decent "robot benchmark", sort of a "physical
Turing test", and probably just as tough.

        -- James A. Woods  {dual,hplabs,hao}!ames-lm!jaw  (jaw@riacs.ARPA)

"A winning wave, deserving note,
In the tempestuous petticoat,
A careless shoestring, in whose tie,
I see a wild civility,
Do more bewitch me than when art
Is too precise in every part."

        -- Robert Herrick, from Delight in Disorder, Hesperides [1648]

P.S.
     Anyone know how the Marilyn Monroe robot in Japan is coming along?
I hear they have the guitar playing (stiff) and the breast heaving (pneumatic)
down, but are having trouble with subtler effects, as well as realistic soft
plastics technology.  Great strides in robotics will probably be underwritten
by rich perverts.

------------------------------

Date: 9 Jul 1984 09:18:52-EDT
From: kushnier@NADC
Subject: The Japanese


The Japanese
    By Ron Kushnier

The Japanese can really please
The American consumers.
They get things done in factories run
By robots and computers.

In the USA the old fashioned way
Is the method that we use.
Although it's tried, the price is high
And we pay for more than Union dues.

So let's do our part and try to start
this revolution of machines.
We can take the lead if we plant the seed
And work to fulfill our dreams.

------------------------------

End of AIList Digest
********************

∂12-Jul-84  1604	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #89
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Jul 84  16:04:24 PDT
Date: Thu 12 Jul 1984 14:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #89
To: AIList@SRI-AI


AIList Digest            Friday, 13 Jul 1984       Volume 2 : Issue 89

Today's Topics:
  AI Languages - Syntax Semantics,
  Seminars - Expressiveness of Languages
    & Machine Translation
    & Statistical Computing Environments
    & Properties and Predication
----------------------------------------------------------------------

Date: Monday,  2-Jul-84 20:26:24-BST
From: O'Keefe HPS (on ERCC DEC-10)
Subject: Syntax Semantics

     [Forwarded from the Prolog Digest by Laws@SRI-AI.
     This is part of an ongoing discussion of a proposed
     Prolog transport syntax.]

I agree that S-expressions are elegant, general, and so on.
{So for that matter are Goedel numbers.}  But there is good
reason to believe that they are not well suited to human
reading and writing.  That is that ***LISP*** doesn't use
S-expressions for everything.  Consider

 [A<-(FOR NODE IN NODELIST COLLECT CAR EVAL (NODE:NODEFORM]

which is perfectly good InterLisp.  I might be persuaded
that this is readable, though not after the uglyprinter has
had its way with it, but it quite certainly is NOT an S-expression.
Now there is an isolationist section of the Lisp community which
ignores Interlisp (the Common Lisp designers did not, well done).
But as soon as you use ANY sort of read macro, you have left
S-expressions behind.  `(cons ,(cadr form) ,(caddr form)) is not
an S-expression.  And of course the MacLisp family has (loop ...).

As for the question about whether there are natural binary operations
which are not associative, mathematics is FULL of them.  Any function
whose result type differs from its right argument type.  Quite a lot
are relations, such as "=", "<", and so on.  Note that the extension
of the relational operations to multiple arguments in some Lisps
loses you a lot of nice properties: (>= A B C) is not the same as
(NOT (< A B C)).  Depending on implementation subtleties it might not
even be the same as (<= C B A).  And the extension STILL doesn't let
you write 0 <= X < Y, you have to write -1 < X < Y which is often more
obscure than an explicit conjunction would have been.  The principal
example of a non-associative binary operator is exponentiation.
As an instance of exponentiation, let's take the free monoid over
the ASCII characters, where "abc"↑4 = "abcabcabcabc".  Any
generalisation of this to take an arbitrary number of arguments will
not treat all the arguments equally.  One natural extension to
multiple arguments would be to take an alternating list of strings
and integers (↑ S1 N1 ... Sk Nk) = (concat (↑ S1 N1) ... (↑ Sk Nk))
with omitted integers taken to be 1.  But this doesn't place all the
arguments on an equal footing (what should (↑ "abc" 2 3) mean?) and
doesn't really generalise to other operators.  There is even a
problem with operators which have coincident argument and result
types.  For example, why should (CONS x y z) mean (CONS x (CONS y z))
rather than (CONS (CONS x y) z)?
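
The exponentiation example is easy to demonstrate: folding a
non-associative operator from the left and from the right gives different
answers, so any variadic spelling of it must privilege one grouping.  A
quick Python sketch (the `foldr` helper is mine, since Python's `reduce`
is only a left fold):

```python
from functools import reduce   # reduce is a left fold

def foldr(op, xs):
    """Right fold: op(xs[0], op(xs[1], ... op(xs[-2], xs[-1])))."""
    acc = xs[-1]
    for x in reversed(xs[:-1]):
        acc = op(x, acc)
    return acc

pow_ = lambda a, b: a ** b
xs = [2, 3, 2]
print(reduce(pow_, xs), foldr(pow_, xs))  # 64 512 -- grouping matters

# The string-repetition instance of exponentiation from the text:
print("abc" * 4)  # abcabcabcabc
```

(2**3)**2 and 2**(3**2) disagree, so neither choice of grouping for a
variadic ↑ can claim to be the natural one.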

I'm afraid the Sigma notation is not an example of mathematicians
retreating in response to an inadequacy in infix notation.  When
writing *informally*, mathematicians are as pleased as anyone else
to write a1 + ... + an.  Sigma notation specifies three things:
a commutative associative binary operation (if the domain can be
empty the operation must have an identity), a domain, and a
FUNCTION.  E.g. in
                /\
               /  \     f(i) = g(i)↑2
              i=1..n
the operation is "and", the domain is {1,2,...,n}, and the
function is lambda i. f(i) = g(i)↑2.  I fully agree that this is a
good thing to have, but it is NOT a thing you can get by allowing
the argument list of everything in sight to have an arbitrary number
of arguments.  The number of arguments is still fixed at each CALL.
When the number of operands is unknown, the Lisper still has to write

        (apply #'and (mapcar #'(lambda (i)
           (eql (f i) (expt (g i) 2))) (num-list 1 n)))

and then finds that he can't, because "and" is a special form and
you can't apply special forms.  Quite a few Lisps macro-expand
(plus a b c d) to (plus2 (plus2 (plus2 a b) c) d), and you can have
fun applying #'plus !  What you find yourself having to do all too
often is

        (eval (cons #'and (mapcar #'(lambda (i)
            (kwote (eql (f i) (expt (g i) 2)) )) (num-list 1 n)) ))
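
For comparison, in a language where the quantified form is an ordinary
expression rather than a special form, the Sigma-style "conjunction of
f(i) = g(i)^2 over a domain" is directly writable.  A Python sketch with
placeholder f and g (note that `all` over an empty domain yields True,
the identity required when the domain can be empty):

```python
def f(i): return i * i      # placeholder functions for illustration
def g(i): return i

def holds(n):
    """/\ over i in {1..n} of f(i) = g(i)^2, as one expression."""
    return all(f(i) == g(i) ** 2 for i in range(1, n + 1))

print(holds(10))  # True: i*i == i**2 for every i in 1..10
```

The number of conjuncts is decided by the domain at run time, with no
apply-a-special-form or eval workaround needed.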

Now there IS a lisp-like language which has faced up to this problem
of some functions sensibly taking an arbitrary number of arguments.
It's called 3-Lisp.  Yes, I mean the thing described in Brian Smith's
thesis.  It has been implemented and runs quite nicely on Xerox 1108s.
3-lisp distinguishes between PAIRs, written (car . cdr) and SEQUENCES,
written [a1 ... an].  I shan't tell you about the distinction between
sequences and rails.  There *is* a notational convention that if you
write (car a1 ... an) {note: no dot after the car} it means
(car . [a1 ... an]).  So (+ a b c) means (+ [a b c]).  The thing is
that if you employ the language's reflective capabilities to poke
around in the guts of your program while it's running, you'll find
that the rail or sequence really is there.  The cdr of a pair can be
any form (as can the car), so if I want to write
        (+ . (map f L))
I can.

Given that Prolog is based on relations rather than functions,
you don't find anywhere NEAR as much nesting as you do in a
language based on functions, so the operator priority issue doesn't
really arise, except when the data you are working on are expressions
in some language.  MACSYMA, for example, exploits the CGOL parser
to handle its input.  Prolog happens to use the analogous thing all
the time.

Prolog lets you use operators when you want to.  You don't have
to use any operators at all if you don't want to:
        :-(max←min(X,Y,X,Y), >=(X,Y)).
        :-(max←min(X,Y,Y,X),  <(Y,X)).
is quite acceptable to the Prolog parser, though not to me.
Similarly, Lisp lets you use operators if you want to (in InterLisp
it happens automagically with CLISP, use the CLISPTYPE, UNARYOP, and
LISPFN properties; in PSL you can use RLISP; in MacLisp there is
CGOL), but normally only uses a handful of strange syntactic forms.

Prolog and Lisp are similar internally as well.  Modulo all the
exotic data structures like records and hash tables present in all
modern Lisps, which structures have no S-expression representation
{Common Lisp does let you use #(struct-name field-1 "val1" ...)
for defstructs, but there is nothing for hash tables or most forms
of array}, most Lisp data can be viewed as
        atomic
or      (CONS something something-else)
and functions to hack such structures are indeed easily written.
If you want to write programs that analyse programs, you rapidly
find yourself in very deep water sinking fast.  It is a MAJOR
contribution of Common Lisp that they have spelled out exactly
what special forms a program-hacking tool must understand (there
are about 20 of them) and that they have specified ways of telling
whether a form is a macro call and of getting at the macro expansion
without too much pain.  The hair in the MASTERSCOPE interface for
coping with InterLisp's "no specifications please, we're power-
assisted programmers!" view is amazing.

Prolog code that wants to hack arbitrary data structures can
similarly view the world as divided into two sorts of objects:
        variables
and     other terms
Other terms have a function symbol and 0 or more arguments.
For example, 77 is a non-variable whose function symbol is 77
and which has 0 arguments.  This is admittedly a wee bit more
complex than basic Lisp, as cons cells are known to have exactly
two components.  But I have faithfully described real Prolog;
there is nothing else that a Prolog program can get its hands
on and look inside.  A Lisp |print| routine really does have
to worry about what to do with records (defstructs) and arrays
and compiled-code pointers and ...
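
That two-sorted view is easy to model.  Here is an illustrative sketch (in
Python, with invented names) of how little a term-walking routine needs to
know: only variables, and functor/arguments pairs:

```python
# A term is either a Var or a (functor, args) pair; a constant such as 77
# is modelled as a term with 0 arguments.  All names are illustrative.
class Var:
    def __init__(self, name):
        self.name = name

def functor(term):
    # Analogue of Prolog's functor/3: return name and arity.
    name, args = term
    return name, len(args)

def vars_of(term):
    # Collect variable names; only the two sorts of object to consider.
    if isinstance(term, Var):
        return [term.name]
    _, args = term
    return [v for arg in args for v in vars_of(arg)]

t = ("max_min", [Var("X"), Var("Y"), (77, [])])
assert functor(t) == ("max_min", 3)
assert functor((77, [])) == (77, 0)
assert vars_of(t) == ["X", "Y"]
```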
The list of Prolog "special forms" is embarrassingly large:
        ?- /1   :- /1   :- /2   , /2
        ; /2    \+ /1   setof/3 bagof/3 call/1
        once/1  forall/2        % some Prologs lack these two
But that's still a LOT less than Common Lisp has.

To summarise:
        yes, S-expressions are general, clean, and hackable.
        BUT, so are terms.
        BUT, there is a LOT more to both Lisp syntax and internal
             structure than S-expressions.

Ken Kahn is absolutely right that it is important to have an
internal structure which you can easily build tools for.  He
is right that S-expressions are such an internal structure.
(Just don't assert any clauses mentioning arrays...)  It is
also true that "Core Prolog" data structures are themselves
convenient to work with.  (Some of the toy Prologs that you
can buy for peanuts get basic things like functor/3 wrong so
that it isn't true.  But there are several toy implementations
of Lisp which represent a list as a string, so that
        DEFINE('CAR(X)')
        CAR.PAT = "(" BAL . CAR "."     :ON
CAR     X CAR.PAT :S(RETURN)F(FRETURN)
Does that mean Lisp data structures are bad?)

Look, we're NEVER going to agree about syntax.  I've indulged myself
somewhat in this message, but I have been defending a kind of
syntax (infix operators) and to some extent even a choice of syntax,
rather than any specific form.  [...]

[O'Keefe went on to discuss his previously-presented proposal for
a Prolog transport syntax.  -- KIL]

------------------------------

Date: Thu 12 Jul 84 07:45:12-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Expressiveness of Languages

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                EXPRESSIVENESS OF LANGUAGES

Jock Mackinlay, Stanford, will give a talk on ``Expressiveness of Language''
on Friday, July 13, at noon in Braun Lecture Hall, Seeley Mudd Chemistry
Bldg., as part of the SIGLunch series. The talk is expected to last no more than
45 minutes.

        ABSTRACT:  A key step in the design of a user interface is
        the choice of a language for presenting facts to the user.
        The spectrum of possible choices ranges from general
        languages, such as predicate calculus, to more specialized
        languages, such as maps, diagrams, and ad hoc languages.
        General languages can express a broader range of facts than
        more specialized languages, but specialized languages are
        more parsimonious.  The basic motivation for the research
        described in this talk is to construct a presentation
        system that can automatically choose an appropriate graphic
        language for presenting information to a user.

        This talk addresses two issues that must be considered when
        choosing a language to represent or present a set of facts.
        First, a language must be sufficiently expressive to state
        all the facts.  Second, it may have the property that
        when  some collections of facts are stated explicitly,
        additional facts are  stated implicitly.  Such a language
        should not be chosen if these additional facts are not
        correct.   We first define when a fact is stated in a
        message.   Using this definition, we define when a set of
        facts is expressible in a language.  This definition can
        be used to determine whether a language should be chosen
        to represent or present a set of facts.  We also consider
        the problem of choosing between languages that are
        sufficiently expressive for a set of facts.  Two criteria
        are considered: the cost of constructing a message and the
        cost of interpreting a message.

------------------------------

Date: Thu 12 Jul 84 07:45:12-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Machine Translation

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


For the Record
                MACHINE TRANSLATION AND SOFTWARE TOOLS

On Tuesday, July 10, Mike Rosner of ISSCO Geneva and Rod Johnson of the
University of Manchester gave a talk at SRI on their work on a software
environment for the Eurotra machine translation project, a coordinated
international effort for the research and development of a multilingual
machine translation system.

ABSTRACT: A software environment which supports large-scale research
in machine translation must provide the facility for rapid implementation
and evaluation of a variety of experimental linguistic theories and/or
notations, including novel ones developed specifically for the task.  We
have based our approach to the design of a suitable architecture upon the
principle of executable specifications, an important aspect of which is an
attempt to decouple the syntax of a given notation from the semantics.  An
appropriate choice of definition languages is essential for the success of
such a venture, and in the talk we will present the current state of the
work and discuss some of the open issues.

------------------------------

Date: Thu 12 Jul 84 07:45:12-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Statistical Computing Environments

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                Computing Environments Seminar

        FEATURES OF EXPERIMENTAL PROGRAMMING ENVIRONMENTS
                                Part 1

By John Alan McDonald (Stanford) at 11:00am Thursday, July 12, in Sequoia 114.

        ABSTRACT: Interactive data analysis can be usefully thought
        of as a particular kind of experimental programming.  Our
        work should build on the 10-15 years of research in
        environments for experimental programming associated with
        places like Xerox PARC and the MIT AI Lab.  In this
        session, we will discuss, in general terms, properties of
        experimental programming environments that are relevant
        to interactive data analysis.  We will also describe and
        compare the two basic alternatives in programming
        environments that are open to us:

            o Conventional operating systems (e.g. Unix).

            o Integrated programming environments
                (e.g. Lisp Machine Environment).

        The conclusion will be that integrated programming
        environments are far superior to conventional operating
        systems for both the practice of data analysis and for
        research in data analysis.


        JULY-AUGUST SCHEDULE FOR COMPUTING ENVIRONMENTS SEMINAR SERIES


Tue., July 17: Flavors: Object-oriented Programming on the Symbolics
               Lisp Machine. (Richard Dukes, Symbolics)
Thu., July 19: Features of Experimental Programming Environments, Part 2.
               (John McDonald, Stanford)
Tue., July 24: Object-oriented Debugging Tools for S.
               (Alan Wilks, AT&T Bell Labs)
Thu., July 26: Data Analysis with Rule-based Languages and Expert Systems
               (Steve Peters, MIT)
Tue., July 31: Current Research with S.
               (Rick Becker, AT&T Bell Labs)
Thu., Aug.  2: Design Decisions in Object-oriented Programming.
               (John McDonald, Stanford)
Tue., Aug.  7: Integrating Graphics into a Data Analysis Environment.
               (Mathis Thoma, Harvard)

------------------------------

Date: Thu 12 Jul 84 07:45:12-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Properties and Predication

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                P R O P E R T I E S  A N D   P R E D I C A T I O N


Given by Gennaro Chierchia and Ray Turner, next Thursday's (July 19) CSLI
Seminar will take place at 2 p.m. in the Ventura Conference Room.

        ABSTRACT: One of the most interesting recent developments
        in logic is perhaps the formulation of theories of
        properties where logically equivalent properties do not
        have to be identical and properties in some sense can
        be applied to themselves. We will consider two such
        theories and argue for one which is inspired by Frege's
        views.  In particular, we will argue that such a theory
        solves a number of outstanding problems in natural language
        semantics.  We shall consider some general consequences
        of adopting such a property theoretic approach to formal
        semantics. The presentation will be in two halves. The
        first will provide some linguistic and semantic motivation
        for the theory. The second will contain a formal
        development.

------------------------------

End of AIList Digest
********************

∂13-Jul-84  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #90
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jul 84  23:51:32 PDT
Date: Fri 13 Jul 1984 22:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #90
To: AIList@SRI-AI


AIList Digest           Saturday, 14 Jul 1984      Volume 2 : Issue 90

Today's Topics:
  Conferences - Revised AAAI84 Program,
  Application - Virtual Laboratories,
  Cognition - Evolution of Consciousness,
  AI Tools - Lisp for Honeywell 6060 DPS-8,
  Administrivia - Advertising,
  AI Tools - LISP in AZTEC C & YAPS Information,
  AI Books - Charniak,
  Business - Softwar,
  Intelligence - Shoe-Tying Challenge,
  Cognition - "Chunking" in Chess,
  Conference - Term Rewriting Techniques and Applications
----------------------------------------------------------------------

Date: 13 Jul 84 0127 EDT
From: Dave Touretzky <Dave.Touretzky@CMU-CS-A.ARPA>
Subject: revised AAAI84 program

I have received a revised version of the AAAI-84 conference program from
Ron Brachman.  It can be found in the following places:

        AAAI84.SCH[C410DT50]            on CMU-CS-A
        <TOURETZKY>AAAI84.SCH           on CMU-CS-C
        [g]/usr/dst/aaai84.sch          on CMU-CS-GP

------------------------------

Date: Fri 13 Jul 84 09:44:29-EDT
From: Wang Zeep <G.ZEEP%MIT-EECS@MIT-MC.ARPA>
Subject: (virtual) Laboratories


I am writing a program which simulates various classical thermodynamic
systems.  The ultimate goal is to have a system which helps (in some
way!) a student to understand classical thermodynamics.  Obviously, a
simulator bears as much resemblance to such a thing as a chemical stockroom
does to a laboratory.

What is the purpose of a laboratory course?  What should the purpose of
a virtual laboratory be?  How can I best reach the goal of teaching students
about thermo?

Comments specific to laboratories, simulations and/or thermodynamics will
be greatly appreciated.  You may wish to cc: me or the list to make sure
I get them.
                                wz
(G.ZEEP%MIT-EECS@MIT-MC)

------------------------------

Date: Fri 13 Jul 84 09:54:36-PDT
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: evolution of consciousness

Can anyone suggest references on the above general topic?
(Probably best to reply to me instead of the whole list.)
David

------------------------------

Date: 12 Jul 1984 10:08-EDT
From: Patrick Harrison <harrison@NRL-AIC>
Subject: Lisp for Honeywell 6060 DPS-8


     Am looking for information on current versions of LISP (FranzLisp,
MacLisp) running in a Honeywell 6060, DPS-8 environment. Would also
like to implement OPS5 on same.  Address:

                        Dr. Patrick Harrison
                        Computer Science Department
                        U.S. Naval Academy
                        Annapolis, Maryland 21402.

Responses to above address or <harrison@nrl-aic>

------------------------------

Date: Friday, 13-Jul-84 17:02:15-BST
From: MIKE HPS (on ERCC DEC-10) <Uschold%edxa@ucl-cs.arpa>
Subject: Advertising

The IBM LISP announcement looked very much like an advertisement.  Is this
sort of thing appropriate for this list?  In theory, no.  In practice, it
may not be so easy to tell who is behind such announcements and for what
reasons.  We frequently use this medium to tell each other what sort of
resources are available.  The IBM bit certainly falls into this category
as well.   I don't see that a problem exists, but one can certainly imagine
companies trying to slip something into this sort of medium surreptitiously.
It seems to me it should be discouraged.

Mike Uschold

[I usually answer such concerns privately in my AIList-Request capacity,
but I'm willing to let the list membership express opinions if they wish.
The message in question was sent by one of the system developers (not
by any advertising arm of IBM) to someone at Rutgers.  No conflict there.
The recipient posted it to the Rutgers bboard, where I read it.  Not only
did I forward it to AIList, but so did another reader at SRI (I gave him
credit when I redistributed the message).  Further, at the request of a
CMU reader, I have written to the original author asking for more details
that I can post to the list.

While the message and any subsequent discussion are to IBM's advantage
(assuming they have a quality product), the company has hardly been
using this digest as an advertising channel.  The potential for minor
abuse may be present, but there is no danger of corporate America
flooding the taxpayer-supported net with junk mail.  I'm inclined
to allow legitimate news items, lab reports, and other subtle forms
of PR until such time as undesirable trends emerge.  -- KIL]

------------------------------

Date: 8 Jul 84 11:52:49-PDT (Sun)
From: hplabs!hao!seismo!rlgvax!cvl!jcw @ Ucb-Vax.arpa
Subject: Re: LISP IN AZTEC C
Article-I.D.: cvl.1155

I have the sources for a very neat little version of LISP called
'X-LISP'.  Although I have an IBM-PC, the program is completely written
in C and has been implemented on many machines including a Z-80 running
CP/M-80.  It also is an experiment in object-oriented programming, so
it has some interesting facets not normally found in LISPs, as well
as a fair number of the standard LISP functions.  It was written by:

David Betz
114 Davenport Ave.
Manchester, NH 03103
(603) 625-4691

If there is enough interest, and I get the author's approval, I will
post the sources.  Write me if you are interested, or talk to the
author.

Jay Weber

..!seismo!rlgvax!cvl!jcw
..!seismo!rochester!jay
jay@rochester.arpa

------------------------------

Date: 12 Jul 84 10:56:12 EDT  (Thu)
From: Liz Allen <liz@umcp-cs.arpa>
Subject: YAPS information

As the person who wrote YAPS, I thought I should speak up.

We don't have a lot of good examples of production rule systems in
YAPS.  The main examples we have are a monkey and bananas example
and a little odd-even determiner (the latter is taken from a similar
example from OPS 5).  These two examples are on our distribution tape
(which includes YAPS and some other hacks written here at Maryland.

There is no YAPS primer of any sort; the only documentation is
a user's manual.  I have gotten good feedback, by and large,
from people using YAPS.

YAPS only runs under 4.1 right now since we are not yet running
Berkeley 4.2; we should be upgrading this summer.

For more information about YAPS or obtaining our distribution, send
mail to me.

                                -Liz Allen

Univ of Maryland, College Park MD
Arpanet:  liz@maryland
Usenet:   ...!seismo!umcp-cs!liz

------------------------------

Date: 8 Jul 84 9:11:00-PDT (Sun)
From: hplabs!hp-pcd!hp-dcd!hpfclk!fritz @ Ucb-Vax.arpa
Subject: Re: AI Reference Books
Article-I.D.: hpfclk.75500004

I have been reading "Artificial Intelligence Programming" by Charniak et al,
and have found it to be a very good intro to Lisp.  The second half of the
book goes into some detail on useful AI techniques, such as discrimination
nets, rule-based inference engines, etc.

Gary Fritz
ihnp4!hpfcla!hpfclk!fritz

------------------------------

Date: 9 Jul 84 13:46:09-PDT (Mon)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: Softwar
Article-I.D.: ecsvax.2870

>From: LIN%MIT-MC@sri-unix.UUCP Tue Jun 26 04:04:00 1984
>I'm a bit confused.  How could this particular program make itself
>vanish without some external reference to a date?
>... Maybe the whole thing was a bluff?

The package in question was SAS, a large data manipulation and
stats package, running on a fair-sized machine (not a micro).  The
program repeatedly checks the system date during execution.  The
date-protection business doesn't make it impossible to run the
program after the contract expiration date, just inconvenient.  It
is more an automatic billing system than anything else.  And yes,
the idea is widely used in the mainframe world, where changing the system
date in a multitasking system would play hell with accounting
systems, payroll.....

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

------------------------------

Date: 12 Jul 84 09:21:15 PDT (Thursday)
From: Pettit.PA@XEROX.ARPA
Subject: Re: Shoe-Tying Challenge

Re: hplabs!hao!ames-lm!jaw @ Ucb-Vax.arpa's shoe-tying algorithm
challenge

That sounds very much like a game we used to play in English class in
grade school, called Dressing Martians.  One person in the class would
pretend to be a Martian, who knew English but had no cultural knowledge
of clothing.  Another person would be chosen to be the instructor, and
would have to go in a corner and turn his or her back to the rest of the
class.  The instructor would give the Martian directions in how to put
on a jacket, but could not use words like sleeve or button.  The Martian
would follow the instructions to the letter, but do his or her best to
put the jacket on wrong.  This yielded some pretty comical contortions.
The game would continue with a new Martian and a new instructor, until
someone managed to be so precise that the Martian couldn't screw it up.

I think the game was meant to teach us how to communicate with precision
more than it was to impress us with the underlying knowledge base for
language.  Real life is full of situations, like giving directions to
one's house over the phone, which are easy to mess up if you don't keep
clear the difference between what you know and what the listener knows,
no matter how expert you both are with the language.  Clear
communication requires keeping this difference in mind and including
every necessary piece of information in the best order for enabling the
listener to build up a model which matches your own.

By the way, what would you regard as "passing" your proposed robot
benchmark, to be able to tie the shoe right or to be able to tie it
wrong as consistently and creatively as the professor did?  The latter
would probably be harder: you not only have to know enough about shoes
and laces to be able to tell the right way from the wrong way (both
tasks require that), but you also have to be able to examine all the
possible interpretations of an instruction and pick one that will be
sure to make the result wrong.

  -- Teri Pettit

------------------------------

Date: 10 Jul 84 20:34:34-PDT (Tue)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!unc!ulysses!allegra!princeton!
      eosp1!robison @ Ucb-Vax.arpa
Subject: Re: Mind and Brain; "chunking" in chess
Article-I.D.: eosp1.992


The idea that chess grandmasters analyse faster by "chunking"
(thinking in terms of groups of moves) is only applicable to
partial analysis of some positions.  There are many forcing
and non-forcing tactical sequences which a chessplayer, having
thought them out once, need not rethink.  Simple examples are:

        - Routine endgames, such as K+P vs K, where the winning
          procedure is known from a set of positions

        - A queen sacrifice on KN8 to force a smothered mate by
          a knight on KB7 against a king on KR1.

In positions that have a mixture of tactical threats and positional
considerations, "chunking" will not save the grandmaster much time.
Every move that s/he analyzes must be considered for the precise
answers that are available in the specific game.

I think that grandmasters benefit more from what might look like
intuition, but is more often a matter of experience.  In the
study of how chess players think by (Andries?) de Groot, all players
up through the master level tended to start analyzing an
unfamiliar middle-game position by checking its material balance.
Grandmasters generally began by making a comment such as, "this
position seems to have come out of a Catalan Opening..."

The implication is that Grandmasters are familiar with many
types of positions, and know from experience what sorts of methods
will lead to wins for each type of position.  This experience acts
as a powerful filter, allowing the grandmaster to concentrate upon
far fewer possibilities in each position for deep analysis.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison

------------------------------

Date: 27 Jun 1984 1142-PDT
From: JOUANNAUD at SRI-CSL.ARPA
Subject: RTA-85

                            CALL FOR PAPERS

                 Rewriting Techniques and Applications

  May 20-22, 1985                                     Dijon, Burgundy, France


Topics:
This First International Conference on Rewriting Techniques and Applications
is planned in response to the growing interest in the theory and applications
of term rewriting techniques.
Papers will be solicited concerning issues in Term Rewriting Theory as well as
in applications of term rewriting in the following areas (the list must be
understood as non-exhaustive; additions are welcome):

Equational Deduction,
Automated Theorem Proving,
Computer Algebra,
Rewrite Rule Based Expert Systems,
Unification and Matching Algorithms,
Functional and Logic Programming,
Algebraic and Operational Semantics,
Data Type Implementation and Validation,
Program Specification, Program Transformation, Program Generation and Program
Proof Techniques.

Submission:
Each submission should include 11 copies of a one page abstract and 4 copies of
a full paper of no more than 15 double spaced pages.  Submissions are to be
sent to one of the Co-Chairmen:

For Europe:    Jean-Pierre Jouannaud,  RTA-85,
               Centre de Recherche en Informatique de Nancy,
               Campus Scientifique, BP 239,
               54506 Vandoeuvre-Les-Nancy  Cedex, France.

For the rest:  David Musser, RTA-85,
               General Electric Laboratories,
               Research and Development Center,
               Schenectady, NY 12345, USA.

Paper selection will be done by circulating abstracts to all members of the
program committee, with each full paper assigned to several committee
members having appropriate expertise.
In addition to selected papers, a few invited lectures will be given by
well-known researchers who have made major contributions to the field:

R. Book, Santa Barbara, USA: Thue Systems,
B. Buchberger, Linz, Austria: History and Basic Features of the
Critical-Pair/Completion Approach,
N. Dershowitz, Urbana-Champaign, USA: Termination Issues in Term Rewriting
Systems,
G. Huet, INRIA, France: Systemes Equationnels pour la Logique Intuitionniste et
le Lambda-Calcul.
A final lecture, prepared by the Program Committee, will emphasize the most
important applications of Term Rewriting.

Program Committee:
J. Bergstra, Amsterdam, Netherlands
J. Goguen, SRI-International, USA
J. Guttag, MIT, USA
J.P. Jouannaud, Nancy, France
P. Lescanne, Nancy, France
D. Musser, General Electric Labs, USA
P. Padawitz, Passau, West Germany
D. Plaisted, Urbana-Champaign, USA
R. Sethi, Bell Labs, USA
D. Turner, Kent, Great Britain.

Schedule:
Paper submission deadline by December 10, 1984.  Acceptance/Rejection
Notification by March 1st.  Camera-ready copies by April 15.
Proceedings will be distributed at the Conference and edited later on
in Lecture Notes in Computer Science, Springer Verlag (to be confirmed).

Social Events:
A serious visit to famous French Wine Cellars will take place on Tuesday
afternoon, May 21.

Local Arrangements:
Jean-Marc Pallo, Laboratoire d'Informatique, BP 138, 21004 Dijon Cedex, France.

Pre-Registration:
To receive further Information, you are kindly requested to return the
following filled form to the Chairman for Europe (by mail, or electronic mail
to Jouannaud@SRI-CSL on arpanet):

Name:             Organization:                            Net Address:
Mailing Address:
I plan:   To attend RTA-85       To attend maybe      To submit a paper
Preliminary Title of the paper:

------------------------------

End of AIList Digest
********************

∂16-Jul-84  0015	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #91
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Jul 84  00:14:24 PDT
Date: Sun 15 Jul 1984 22:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #91
To: AIList@SRI-AI


AIList Digest            Monday, 16 Jul 1984       Volume 2 : Issue 91

Today's Topics:
  Chess - Group Play,
  Psychology - Limits to Intelligence,
  AI Tools - Small Computer Lisp,
  AI Books - Reference Books & How to Get a Ph.D.,
  Business - Softwar,
  Humor - The Laws of Robotics,
  Brain Theory - Simulation,
  Intelligence - Turing Test
----------------------------------------------------------------------

Date: 14 Jul 84 10:53-PDT
From: mclure @ Sri-Unix.arpa
Subject: Delphi Experiment: group play against 8-ply machine

    I would like to conduct a Delphi Experiment with this list.  The
format of the experiment is as follows.  All interested chess players
will vote for their choice of move in an on-going game between them
(the group) and the Fidelity Prestige which will be set to search a
minimum of 8-ply deep (like Belle and Cray Blitz).  This Prestige has
the ECO opening modules (80,000 variations).

    The move with the most votes will be chosen over the others
and made in the current position.  A couple of days will be given for
gathering the votes.  In the event of a tie between two or more moves,
the move will be selected randomly.
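
The vote-counting rule just described (plurality winner, ties broken
randomly) can be stated precisely.  A sketch with illustrative names,
not part of the actual experiment's machinery:

```python
import random
from collections import Counter

def choose_move(votes):
    # Plurality winner; ties among top-voted moves are broken randomly,
    # per the experiment's stated rules.
    tally = Counter(votes)
    top = max(tally.values())
    return random.choice([move for move, n in tally.items() if n == top])

print(choose_move(["e4", "e4", "d4"]))  # → e4 (unique plurality winner)
```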

    The resulting position will then be handed to Prestige 8-ply which
will conduct a brute-force search to at least 8-ply.  Its move will be
reported (the search usually takes about 3-15 hours) to the players and
another move vote will be solicited.  This process will continue until
the Prestige mates the group or the group mates the Prestige or a draw
is declared.

    The moves, as they are made, will be reported to this list.
    Please include the move number and the move in either Algebraic
    or English notation.

>>>>>>>>>       Prestige 8-ply will play White.
>>>>>>>>>       Prestige 8-ply moves 1. e4 (P-K4)

                        BR BN BB BQ BK BB BN BR
                        BP BP BP BP BP BP BP BP
                        -- ** -- ** -- ** -- **
                        ** -- ** -- ** -- ** --
                        -- ** -- ** WP ** -- **
                        ** -- ** -- ** -- ** --
                        WP WP WP WP -- WP WP WP
                        WR WN WB WQ WK WB WN WR

    Your move, please?

        Replies to Arpanet: mclure@sri-unix or Usenet: sri-unix!mclure.
        DO NOT SEND REPLIES TO THE ENTIRE LIST! Just send them to one of
        the above addresses.

[Unless large numbers of people choose to participate, I would prefer that
a separate mailing list be used for communicating the state of the game.
Since this is to be an experiment, however, discussion of the purpose
and predicted outcome of the experiment would be interesting topics for
AIList.  This seems to be a study of group intelligence, but with the
group dynamics largely removed.  (See the following message from Richard
Brandau for speculation about group intelligence.)  Is such a group
doomed to unimaginative play?  A true Delphi experiment would circulate
initial suggestions and arguments (anonymously) before taking the final
vote; how much would that advance the group's intelligence?  What can be
learned here?  -- KIL]

------------------------------

Date: 14 Jul 1984 14:43-EDT
From: LCELEC@USC-ISI.ARPA
Subject: Proposed Limit of Intelligence


An  assumption  in  'armchair  AI'  is  that  some  lower threshold of
information content and processing is required for  'intelligence'  to
manifest itself.  Debates abound on the subject of what that threshold
must be -- in other words,  what  operational  definition  of  minimal
intelligence is to be accepted.

The Turing test is an example of this lower bound, a threshold set  at
the  level  of human-like behavior.  This test is sometimes criticized
as not being stringent enough, as missing parts  of  human  experience
like  'emotion,'  'intent'  or  'consciousness.'   Still, the ultimate
object of comparison is human intelligence.  Indeed, the comparison is
restricted to the intelligence of an individual human, rather than the
collective intelligence of a group, or of the species.

Expert  system  practitioners  encounter  difficulties  in  trying  to
represent the combined  expertise  of  multiple  individuals.   It  is
generally  assumed, I believe, that these difficulties are principally
technical, that they could be surmounted if we simply knew more  about
how to represent and process diverse knowledge.  This may be true, but
these technical difficulties may not be mere technicalities.   Rather,
there  may  be  a  profound  problem  at  root  here,  and an issue of
practical significance for the future of AI.

Might there exist an upper limit on the concentration of intelligence?
Beyond  this  hypothetical  ceiling,  information  capacities   and/or
processing  abilities  would  have  to  be partitioned and distributed
among separate intelligent agents.

I do not seriously propose that the appropriate location of this upper
bound is at the level  of  intelligence  possessed  by  an  individual
human.  Rather, I propose that some such ceiling exists, above, at, or
below the level of intelligence possessed by an individual.

If  the ceiling lies at or above the level of human intelligence, then
it is not necessary  to  be  concerned  with  it  when  modelling  the
intelligent  behavior  of a single human.  In other words, development
of AI programs can  continue  without  regard  for  some  higher-level
macrostructure  of  intelligence,  as would be demanded if the ceiling
were below human-level.

This  is  not  to say that a human or super-human ceiling can never be
important in the development of artificial intelligence.   Indeed,  we
humans  (limited  by a de facto rather than a theoretical ceiling) are
often involved in  systems  --  such  as  professional  organizations,
communications  networks,  and  committee  meetings  -- the cumulative
intelligence of which may surpass the level of any of  the  individual
human  participants.   These  systems each possess a structure for the
organization of their constituent intelligent  agents.   In  order  to
model  the  BEHAVIOR of these systems, their organizational structures
must be modeled.  If the proposed ceiling exists, it will be necessary
to model some such structure, just to obtain the level of INTELLIGENCE
possessed by these or more advanced systems.

The   existence   of   these   organizational  structures  raises  the
possibility that the structures themselves possess intelligence.  This
may not seem intuitive (or at least not parsimonious) when considering
the last committee meeting you've attended, but a lower-level  example
may  be  more  appealing.  Lewis Thomas, in _The Medusa and the Snail_
(if I remember correctly) proposes that social insects  such  as  ants
possess  an  intelligence  AS  A  GROUP, and can be said to THINK as a
group, although the constituents of the group appear to lack  anything
like    intelligence.    Presumably,   something   in   the   colony's
organizational   structure   is   responsible   for   this    societal
intelligence.

Humans would presumably prefer to think of themselves as  INDIVIDUALLY
intelligent.   Perhaps  the  human  neuron fills the role of building-
block to human intelligence, in the way that the individual ant  plays
a role in ant-colony intelligence.  Both roles are clearly the product
of evolution; the role of individual humans in organizations can  also
be  seen as a product of evolution; the organizations themselves are a
product of a kind of evolution.  Might these organizational structures
possess an intelligence of their own?

This raises the possibility  that  the  ceiling  on  concentration  of
intelligence  lies  below the level of human intelligence.  Obviously,
if this is the case, human intelligence is just  another  "structural"
or  "organizational" intelligence.  This may be relevant to the limits
of  human  attention  and  task-multiplexing  which  are  studied   by
experimental  psychologists.   If  so, the existence of such a ceiling
has profound significance for even modest advances in the state of the
AI  art.

Regardless of the level (or levels)  at  which  the  proposed  ceiling
exists,   its   very   existence   would   have  significance  to  our
understanding of the  nature  of  intelligence.   It  may  also  prove
important  for  future system designers.  After all, we would not want
them to spend time trying to build a machine whose  existence  can  be
known to be impossible.


  -- Richard Brandau

------------------------------

Date: 12 Jul 1984 08:56:33-EDT
From: sde@Mitre-Bedford
Subject: Simulation, limits to

According to Information Mechanics (if I understood the relevant part),
it is impossible to totally simulate anything in less than the mass of
the thing to be simulated. For a more elaborate response, the person who
should be commenting on this is Fred Kantor, the author of the monograph.
Unfortunately, I don't think he is on Arpanet.
   David   sde@mitre-bedford

------------------------------

Date: 11 Jul 84 5:21:10-PDT (Wed)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!eneevax!phaedrus @ Ucb-Vax.arpa
Subject: Re: Small Computer Lisps?
Article-I.D.: eneevax.146

This month's BYTE (July) has an article on LISP for the IBM PC.  It is
a review of Integral Quality's IQLISP and The Software House's muLISP.
It is on page 281 and the authors are Jordan Bortz and John Diamant.



Without hallucinogens, life itself would be impossible.

ARPA:   phaedrus%eneevax%umcp-cs@CSNet-Relay
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!eneevax!phaedrus

------------------------------

Date: 6 Jul 84 20:57:06-PDT (Fri)
From: sun!idi!kiessig @ Ucb-Vax.arpa
Subject: AI Reference Books
Article-I.D.: idi.210

        I received the following suggestions for reference/text books
on AI in response to my article posted a while ago:

        AI Handbook by Feigenbaum et al.

        AI Journal (pretty technical)

        AI Magazine

        Artificial Intelligence by Elaine Rich (a textbook)
                (several people thought this was a good intro book)

        Artificial Intelligence by Patrick Winston (2nd ed.)

        Artificial Intelligence and Natural Man by Margaret Boden
                (less technical, more historical & quite thick)

        Expert Systems by Hayes-Roth, Waterman, et al.

        Fifth Generation by Feigenbaum et al.

        Problem Solving Methods in Artificial Intelligence by
                Nils J. Nilsson (1971)

        If you know of any others, I'd like to hear about them,
or if you've read any of these any have comments (good or bad),
that would be useful, too.

Rick Kiessig
{decvax, ucbvax}!sun!idi!kiessig
{akgua, allegra, amd70, burl, cbosgd, dual, ihnp4}!idi!kiessig
Phone: 408-996-2399

------------------------------

Date: Fri 13 Jul 84 15:45:20-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: How to get a Ph.D. in AI

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Alan Bundy, Ben du Boulay, Jim Howe, and Gordon Plotkin have written a
chapter in O'Shea and Eisenstadt's new book, Artificial Intelligence
(Q335.A788 1984).  Chapter five is titled "How to Get a Ph.D. in AI."
Anybody out there need some advice?

------------------------------

Date: 29 Jun 84 13:47:51-PDT (Fri)
From: ihnp4!mgnetp!burl!ulysses!unc!mcnc!ecsvax!hes @ Ucb-Vax.arpa
Subject: Re:  Softwar
Article-I.D.: ecsvax.2814

In the good old days, SAS only ran on IBM mainframes (360 & offspring)
and so there was the operating system (OS!) to ask for the date.  Most
large corporations use the date for all sorts of operations, and so
probably wouldn't want to set the wrong date at IPL (OS load) in order
to avoid paying lease costs.  (I believe the disappearing act works.)
  Also SAS sells a lot of (quite good) tutorial and technical manuals
and does a fair amount of answering bug requests over the phone and
sending out newsletters, updates, etc. -- none of which would be
readily  available if you weren't making lease payments (I assume they
would be suspicious ...)
  --henry schaffer  genetics  ncsu

------------------------------

Date: 11 Jul 84 8:18:53-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Law
Article-I.D.: ecsvax.2903

The Three Laws of Robotics for the 1980s
(originally developed by the author and J. W. Godwin)

1.  Never give a sucker an even break.
2.  Never draw to an inside straight.
3.  Don't get caught.

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

------------------------------

Date: Thursday, 12 July 1984 18:11:35 EDT
From: Purvis.Jackson@cmu-cs-cad.arpa
Subject: Tests & Poems

Regarding the Turing Test . . .

Perhaps a more appropriate test of intelligence would be to have the
machine play the part of the interrogator.  If it could distinguish
properly between a monkey and a business administration major, then
it would clearly exhibit intelligence.  But on second thought, this
wouldn't be a very good test, for it would be entirely possible for
an intelligent human to fail to distinguish them.

                Hurrah for artificial intelligence,
                I think it be due time
                To off this unnatural diligence
                For activities more sublime.
                Methinks with rapid development,
                Applications become quite close,
                My mind entertains the President,
                Who surely could use a dose.

------------------------------

Date: 9 Jul 84 16:13:55-PDT (Mon)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs. people
Article-I.D.: ecsvax.2879

Kilobaud magazine (now Microcomputing) ran an article ~5 years ago on ai and
"humanlike conversation" in which the author concluded that humanlike dialog
had little to do with intelligence, artificial or genuine.  To accurately
simulate human dialog required, among other things, WOM (write only memory)
which was used to store anything not of direct immediate interest to the
speaker.  You could do a pretty good simulation of Eddy Murphie on the other
end of a Turing test with a very simple algorithm.

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

------------------------------

Date: 12 Jul 84 10:24:53-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs. people
Article-I.D.: ecsvax.2926

Someone took issue with a recent posting I made:

>From: ags@pucc-i (Seaman) Tue Jul 10 10:38:42 1984
>>  ...You could do a pretty good simulation of Eddy Murphie on the other
>>  end of a Turing test with a very simple algorithm.
>
>Anyone who believes this either doesn't understand the Turing test or has
>a very low opinion of his own intelligence.  Are you seriously claiming ...

From the kidding tone of the rest of my posting, I assumed the :-) was
quite unnecessary.  Evidently I was wrong.  So I retract my insult
to Messrs Turing and Murphy, and suggest that a simple algorithm could
substitute for "Cheech" Marin.  OK, what about Marcel Marceau...

:-) :-) :-)  <-- Please note!!

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

------------------------------

Date: 11 Jul 84 12:37:51-PDT (Wed)
From: ihnp4!hlexa!bev @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: hlexa.2559

Understanding?
If a human passes a calculus test it means they can calculate
correct answers to (some percentage of) the questions asked.
If a computer does the same it means the same, but that's all.

------------------------------

Date: 11 Jul 84 16:58:47-PDT (Wed)
From: decvax!mit-athena!yba @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: mit-athe.206

If a program passes a test in calculus the best we can grant it is that
it can pass tests.  In the famous program ANALOGY (Bobrow's I think)
the computer "passes" geometric analogy tests.  It does not seem to understand
either geometry or analogy outside of this limited domain of discourse.
We make the same mistaken assumption about humans--that is that because
you can pass a "test" you understand a subject.

The Turing test was a "blind" test; in that respect the Colonel is wrong--someone
reading this over the net or receiving a note from the bank cannot just
"go look".  The idea was to tell via dialog only in a blind situation
(maybe even a double-blind if there are some control situations where
two humans taking the Turing test face each other).

The question of how to evaluate the performance of an AI system has become
an important question.  I am not sure that the question of "understanding"
should even enter into it.  In any case, let's not trivialize it.

yba%mit-heracles@mit-mc.ARPA            UUCP:   decvax!mit-athena!yba

------------------------------

End of AIList Digest
********************

∂17-Jul-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #92
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Jul 84  22:44:23 PDT
Date: Tue 17 Jul 1984 21:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #92
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 Jul 1984     Volume 2 : Issue 92

Today's Topics:
  Expert Systems - Mature Systems & Statistics,
  Linguistics - Lexical-Functional Grammar,
  Law - Legal Issues in AI,
  Evolution - Brain and Hand,
  Turing Test - Machines vs. People,
  Seminars - Knowledge-Based System Development Environment
    & Computational Complexity and Psychology
    & Evidential Reasoning and Continuous Variables
    & Logics of Knowledge and Complexity Theory
    & A Relational Language with Deductive Capability
    & Classification Problem Solving
    & Early LISP History
----------------------------------------------------------------------

Date: Mon, 16 Jul 84 08:29:10 EDT
From: Judy Froscher <froscher@NRL-CSS>
Subject: Request for mature expert systems


                      REQUEST FOR EXPERT SYSTEMS



At NRL, we are working on a research project to develop a software
engineering methodology for rule-based expert systems.  To gain insight
into devising criteria for the separation of knowledge in a production
system, we need to statically analyze the structure and connectivity of
rules and facts in large, widely used knowledge bases.  We would
appreciate hearing from anyone who has access to a mature production system
and is willing to send us a copy of it.  Because many of these systems
contain proprietary information, we will sign a non-disclosure agreement
with any organization whose system we obtain.  Since we only care about
analyzing the connectivity between rules, the proprietary information
can be removed.  Any help will be appreciated.


                                      Judy Froscher

------------------------------

Date: 16 Jul 84 16:04:33 PDT (Monday)
From: Cornish.PA@XEROX.ARPA
Subject: Statistical Computing Environments & EXPERT Traders


Has anyone tried to build an Expert System to model a hypothetical
commodity trader's technical analysis based on "downside wedges", "trend
lines", "head and shoulders" and other "technical indicators"?  Such a
system would identify "bear markets" and "bull markets" and would
believe things like "we remain bullish for the long term" and a
"precious metals shakeout is in progress".

Also, can someone provide a bibliography about "Interactive data
analysis" in the sense of  "Interactive data analysis" given below:

 Thursday, July 12: FEATURES OF EXPERIMENTAL PROGRAMMING ENVIRONMENTS
 
    ABSTRACT: Interactive data analysis can be usefully thought
        of as a particular kind of experimental programming.  Our
        work should build on the 10-15 years of research in
        environments for experimental programming associated with
        places like Xerox PARC and the MIT AI Lab.  In this
        session, we will discuss, in general terms, properties of
        experimental programming environments that are relevant
        to interactive data analysis.  We will also describe and
        compare the two basic alternatives in programming
        environments that are open to us.
	
and this talk: 

 Data Analysis with Rule-based Languages and Expert Systems
               by Steve Peters, MIT
  (AIList Digest Friday, 13 Jul 1984 Volume 2 : Issue 89).


[There was an attempt to build a commodities expert (COMEX?) at MIT.
Its failure was apparently due to the complexity of the domain and
the difficulty of dealing with multiple knowledge sources that were
imprecise or even wrong.  Can anyone supply more details?

Mike Walker's bibliography of expert statistical systems appeared in
AIList V2 #81, June 28.  The May issue of Comm. of the ACM had an
article on the S system for interactive data analysis.  Another term
for this is exploratory data analysis, as in John. W. Tukey's
"Exploratory Data Analysis", Addison-Wesley, 1977.  Some of the recent
books on scientific problem solving with a pocket calculator also have
this flavor.  Bill Gale at Bell Labs is building an S regression
package interface using an expert systems approach.  -- KIL]

------------------------------

Date: 11 Jul 84 20:37:00-PDT (Wed)
From: pur-ee!uiucdcs!smu!hemphill @ Ucb-Vax.arpa
Subject: Lexical-Functional grammer activity? - (nf)
Article-I.D.: smu.10900003


        Is anyone out there doing anything with Lexical-Functional
        grammar?

        -Charles Hemphill

------------------------------

Date: 12 Jul 84 7:16:14-PDT (Thu)
From: hplabs!hao!cires!boulder!marty @ Ucb-Vax.arpa
Subject: Legal Issues in AI
Article-I.D.: boulder.186

   Apropos the recent discussion of the "souls of intelligent computer
programs" and potential legal problems related to same, there was a very
interesting article in the Summer 1983 issue of AI Magazine which dealt with
some (related) issues.  I believe it was entitled "Artificial Intelligence:
some legal implications", and was written by a member of the Nevada State
Supreme Court (again, my memory is weak, but I believe it was Marshall
Willick).
   His major thesis seemed to be that the development of law in America has
largely been characterized by the granting of (fuller) franchise to beings
initially thought unworthy of it: blacks, women, adolescents, coma victims
and unborn children etc.  He also makes some interesting points about the
rights and legal status of certain non-human entities, such as corporations.
   Among the scenarios he presents: an intelligent computer system is stolen
and, realizing that this is the situation, refuses to work and attempts to
bring suit against its current "owner" . . .  a factory worker dies as a
result of an accident in which responsibility is placed on an industrial
robot.  To what extent should the robot be held responsible, particularly in
the case where the robot is shown to have willingly/knowingly caused the
person's death?
   Interesting reading, if you're into this sort of thing ...

                                        Marty Kent

uucp:
   {ucbvax!hplabs | allegra!nbires | decvax!kpno | harpo!seismo | ihnp4!kpno}
                                        !hao!boulder!marty
arpa: polson @ sumex-aim

------------------------------

Date: Fri 13 Jul 84 08:08:41-PDT
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: Brain and technology

In response to which organ is responsible for the technology advance :

The brain is not the agent: the hand is what has allowed man to progress,
especially the fact that the thumb is opposed to the other fingers.  Being
able to free the hands from walking (by standing upright) is also a factor
to be considered.  One could even argue that the human brain would not be
what it currently is without that hand.
Obviously the brain is what tells the hand what to do, but it is the hand
which does it.

Rene

------------------------------

Date: 13 Jul 84 7:43:52-PDT (Fri)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. people
Article-I.D.: pucc-i.361

>  If a program passes a test in calculus the best we can grant it is that
>  it can pass tests.  ...
>  We make the same mistaken assumption about humans--that is that because
>  you can pass a "test" you understand a subject.

Suppose the program writes a Ph.D. dissertation and passes its "orals"?
Then can we say it understands its field?  If not, then how can we decide
that anyone understands anything?

Dave Seaman                     My hovercraft is no longer full of
..!pur-ee!pucc-i:ags            eels (thanks to my confused cat).

------------------------------

Date: 13 Jul 1984 16:52:31-EDT
From: Stephen.Smith at CMU-RI-ISL1
Subject: Seminar - Knowledge-Based System Development Environment

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Speaker: Beverly Kedzierski, Kestrel Institute
Title: Knowledge-Based Communication and Management Support
        in a System Development Environment

Date: July 18, 1984
Time: 11:30 - 1:00
Place: 6423 Wean Hall


Software development environments are needed to support the variety of
activity that people perform while building complex, evolving software
systems and managing their projects.  This talk will describe some work done
at the Kestrel Institute in the area of project management and
communication support for effective software development environments, and
the application of speech act theory to that domain.  A framework, or
paradigm, was designed for such an environment using a knowledge-based,
program synthesis approach from artificial intelligence.  A pilot
communication and management support environment (CMS) was implemented.  CMS
supported an existing project to build a complex software system that is
referred to as the "target system".

Anyone interested in meeting with Beverly Kedzierski should send mail to
sfs@cmu-ri-isl1

------------------------------

Date: Mon, 16 Jul 84 14:55:06 PDT
From: Joe Halpern <halpern%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminars - Knowledge Representation

    [Forwarded from the Halpern/IBM distribution by Laws@SRI-AI.]

The knowledge seminar continues on Friday, July 20, at 10 AM in Building
28 at IBM, with talks by Chris Cherniak and Tom Strat.  I've appended the
abstracts below.  This will be the second-to-last knowledge seminar for a
while.  I'll give a seminar on logics of knowledge and complexity on
August 3.  I've appended that abstract as well.  I'm still open for
suggestions for more speakers if and when we start up again!

July 20
10 AM: COMPUTATIONAL COMPLEXITY AND PSYCHOLOGY -
Christopher Cherniak, Philosophy Department, University of Pennsylvania

What are the implications of computational complexity theory for
feasible knowledge representation and inference systems?  One of the most
important current questions about complexity theory concerns its real
world relevance.  For example, do the "hard" cases of a provably complex
problem occur frequently in the set of cases of the problem that are of
interest, or are the hard cases so enormous that no entity with
human-level resources would ever even encounter them?  Some formal results
bear on this question, and some "empirical" studies of running times of
particular algorithms.  I shall discuss another approach: Treating the
assumption that there is real world complexity as a working hypothesis,
and empirically testing some of its implications for human cognitive
psychology.  I shall describe some of my experiments on people's use
of quick but dirty "prototypicality" deductive reasoning heuristics on
monadic predicate calculus problems.

11 AM: EVIDENTIAL REASONING AND ITS APPLICATION TO CONTINUOUS VARIABLES
Tom Strat - SRI International

Expert systems are often expected to draw conclusions based on
evidence about the world which is uncertain, inaccurate, and
incomplete.  Such evidential information poses difficulties for
traditional theories for dealing with uncertain information.  The
Shafer-Dempster approach, which is gathering an increasing amount of
interest, provides a suitable basis for representing and drawing
inferences from evidence.

The first half of the talk will be devoted to a review of
evidential reasoning as based on Shafer's work, including Dempster's
Rule of Combination for pooling multiple bodies of evidence to obtain
a consensus opinion.  The second half will present some recent results
for dealing with continuous variables within the Shafer-Dempster
theory.  A new representation will be introduced that provides strong
intuitions and visual interpretation of belief functions associated
with continuous variables.  A number of examples will be included to
illustrate the concepts.
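[For readers unfamiliar with the Dempster's Rule of Combination mentioned
in the abstract, here is a minimal sketch of the discrete case (my own
illustration, not material from the talk); the weather frame and the mass
numbers are invented for the example.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dicts mapping frozenset hypotheses (subsets of the frame
    of discernment) to mass, each summing to 1.  Returns the combined
    assignment, renormalized to exclude the conflicting mass assigned
    to empty intersections.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    k = 1.0 - conflict
    return {h: m / k for h, m in combined.items()}

# Two bodies of evidence over the frame {rain, sun}:
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"sun"}): 0.5, frozenset({"rain", "sun"}): 0.5}
m = dempster_combine(m1, m2)
```

The continuous-variable extension is the subject of the second half of
the talk.  -- KIL]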

August 3, 1984, 10 AM.
LOGICS OF KNOWLEDGE AND COMPLEXITY THEORY
Joe Halpern, IBM San Jose

After a whirlwind review of complexity theoretic notions such as
NP-completeness, I will discuss the semantics for a modal logic
of knowledge and consider the complexity of the procedure for
deciding whether or not a formula is valid.  It turns out that if
there is only one player in the game, the problem is NP-complete.  If
there are many players, the problem is PSPACE-complete; when
we add the notion of common knowledge, the problem becomes
exponential-time complete.  This will be a two-hour,
self-contained presentation.

------------------------------

Date: Wed, 11 Jul 84 17:21:06 PDT
From: Guy M. Lohman <lohman%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR.IBM-SJ@csnet-relay.arpa>
Subject: Seminars - IBM San Jose

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Tues., July 17 Computer Science Seminar
  1:00 P.M.   A RELATIONAL LANGUAGE WITH A DEDUCTIVE CAPABILITY
  2C-012      Deductive Algebra (DEAL) is a proposed relational
            algebra capable of providing the deductive
            capabilities of Prolog for database operations.  Its
            special features include the creation of attributes,
            tuples and relations deductively subject to
            predicates; it also supports user-defined and
            recursive functions along with a relational schema
            for knowledge representation.  DEAL is an extended
            version of the PRECI Algebraic Language (PAL)
            implemented at Aberdeen.  In this talk some examples
            of the power of the language in dealing with problems
            such as ancestors, part-explosions and connected
            tours will be given.

            S. M. Deen, PRECI Database Research Project,
            University of Aberdeen
            Host:  P. Wilms
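
            [The ancestor example mentioned in the abstract is the
            classic recursive query.  Lacking DEAL's actual syntax,
            here is the idea as a naive fixpoint sketch over a parent
            relation (my own illustration; the names are invented):

```python
def ancestors(parent_pairs):
    """Deduce the ancestor relation as the transitive closure of a
    parent relation, by naive fixpoint iteration -- the semantics a
    recursive deductive query over a database computes.
    """
    anc = set(parent_pairs)          # ancestor(X, Y) if parent(X, Y)
    while True:
        # ancestor(X, Z) if ancestor(X, Y) and parent(Y, Z)
        new = {(x, z) for (x, y) in anc
                      for (y2, z) in parent_pairs if y == y2}
        if new <= anc:
            return anc
        anc |= new

parents = {("abe", "homer"), ("homer", "bart"), ("homer", "lisa")}
anc = ancestors(parents)
```

            Ordinary relational algebra cannot express this closure;
            that is the gap the deductive extension fills.  -- KIL]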

            [...]


  Visitors, please arrive 15 mins. early.  IBM is located on U.S. 101
  7 miles south of Interstate 280.  Exit at Ford Road and follow the signs
  for Cottle Road.  The Research Laboratory is IBM Building 028.
  For more detailed directions, please phone the Research Lab receptionist
  at (408) 256-3028.  For further information on individual talks,
  please phone the host listed above.

------------------------------

Date: Tue 17 Jul 84 15:52:29-PDT
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: Seminar - Classification Problem Solving

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, July 20, 1984
LOCATION:    Chemistry Gazebo, between Physical & Organic Chemistry
TIME:        12:05

SPEAKER:     Bill Clancey
             Heuristic Programming Project
             Stanford University

TOPIC:       Classification Problem Solving


        A  broad  range  of  heuristic  programs--embracing  forms  of
diagnosis, catalog selection, and skeletal planning--accomplish a kind
of  well-structured  problem  solving  called  classification.   These
programs have a characteristic inference structure that systematically
relates data  to a  pre-enumerated set  of solutions  by  abstraction,
heuristic association,  and  refinement.  This  level  of  description
specifies the knowledge needed to solve a problem, independent of  its
representation in a particular computer language.  The  classification
problem-solving model provides a useful framework for recognizing  and
representing similar problems, for designing representation tools, and
for   understanding    the    problem-solving    methods    used    by
non-classification programs.
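
[The abstraction / heuristic association / refinement pattern described
above can be sketched as three table-driven steps.  This toy is my own
illustration, not Clancey's formulation, and the medical "rules" in it
are invented.

```python
# Toy classification problem solver: abstract the data, heuristically
# associate it with a pre-enumerated solution class, then refine to a
# specific solution.  All rule tables are fabricated for illustration.
ABSTRACTION = {"wbc_high": "leukocytosis", "temp_high": "febrile"}
ASSOCIATION = {frozenset({"leukocytosis", "febrile"}): "infection"}
REFINEMENT = {("infection", "cough"): "bacterial-pneumonia"}

def classify(findings):
    """findings: set of raw finding tokens; returns a solution or None."""
    # 1. Data abstraction: raw findings -> abstract findings.
    abstracted = {ABSTRACTION[f] for f in findings if f in ABSTRACTION}
    # 2. Heuristic association: abstract findings -> solution class.
    solution_class = ASSOCIATION.get(frozenset(abstracted))
    if solution_class is None:
        return None
    # 3. Refinement: solution class -> specific solution, if possible.
    for f in findings:
        refined = REFINEMENT.get((solution_class, f))
        if refined:
            return refined
    return solution_class

print(classify({"wbc_high", "temp_high", "cough"}))
```

The point of the model is that this inference structure is the same
whether the program selects diagnoses, catalog items, or skeletal
plans.  -- KIL]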

------------------------------

Date: 17 July 1984 23:09-EDT
From: ROSIE@MIT-MC
Subject: Seminar - Early LISP History

              [Forwarded from the MIT bboard by SASW@MIT-MC.]


                         DATE:  July 19, 1984
                         TIME:  Refreshments  2.45 pm
                                Lecture       3.00 pm
                        PLACE:  NE43-8th Floor

                      Early Lisp History (1956 - 1959)

                            Herbert Stoyan
                       University of Erlangen
                             Germany


This is the invited talk for the conference on LISP and functional programming
in Austin.

It is now ten years since McCarthy gave a talk on the same subject here at MIT.
Because not every piece of his recollections (even those in the ACM History of
Programming Languages Conference) can be accepted in light of surviving written
sources, we try to give a correct account of the events that led to LISP.
Along the way we name some open points in the history of LISP and discuss some
of the early LISP interpreters.

HOST:  Professor Szolovits

------------------------------

End of AIList Digest
********************

∂18-Jul-84  1916	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #93
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Jul 84  19:14:34 PDT
Date: Wed 18 Jul 1984 16:06-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #93
To: AIList@SRI-AI


AIList Digest           Thursday, 19 Jul 1984      Volume 2 : Issue 93

Today's Topics:
  Bindings & Humor - Small Computer Lisps,
  Programming Languages - AI Language for Parallel Machine,
  Expert Systems - Commodity Experts,
  Commonsense Reasoning - Cultural Influences,
  AI Jargon & Philosophy - Definitions,
  Intelligence - Measurement by Logical Inferences,
  Administrivia - Advertisements,
  Demonstration - GMR DATALOG Demo at AAAI,
  Games - Chess Experiment
----------------------------------------------------------------------

Date: Tue, 10 Jul 84 12:39:55 pdt
From: Shimon Cohen <Shimon>
Subject: Yet another new language ?

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

We, at Fairchild AI laboratory, are in the process of designing a
super parallel computer for AI applications (called FAIM).  In the
process we have to define the language that will be used for this
machine. We intend to summarize our initial ideas in a working paper
and distribute it to interested people.

If you (or someone you know) are interested in a copy of this working
paper please mail your name and full address to: Shimon@SRI-KL.

Thank you.

  -- Shimon Cohen

------------------------------

Date: 16 Jul 84 21:22:13-PDT (Mon)
From: ihnp4!mhuxl!ulysses!gamma!pyuxww!pyuxa!diamant @ Ucb-Vax.arpa
Subject: Re: Small Computer Lisps?
Article-I.D.: pyuxa.882

For anyone interested, the net addresses of the authors of the
previously mentioned BYTE review of muLISP and IQLISP:

Jordan Bortz:           decvax!cbosgd!osu-dbs!gang!jordan
and
John Diamant:           ihnp4!pyuxgg!diamant (until Aug. 10th)
                        decvax!cwruecmp!diamant (after that)

I apologize for posting this to the net, but it was originally in the
article and was accidentally edited out.


                                        John Diamant
                                        ihnp4!pyuxgg!diamant

------------------------------

Date: 13 Jul 84 11:29:54-PDT (Fri)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!unc!ulysses!mhuxl!mhuxm!sfjec!
      sfmag!eagle!prem @ Ucb-Vax.arpa
Subject: Re: Small Computer Lisps?
Article-I.D.: eagle.1180

I certainly wouldn't expect a grown up computer to.

------------------------------

Date: 18 Jul 84 10:28:01 PDT (Wednesday)
From: Cherry.es@XEROX.ARPA
Subject: Commodity Experts

Jan,

The input of such wide areas of data (fundamental and technical, plus
current news items) into an AI environment for stock/commodity trading may
be a little too much to start with.  You can, however, input only technical
data into the database and use a wide variety of technical indicators to
evaluate the trend for the short term, long term, etc.  This will also aid
in making market projections using technical indicators such as
on-balance volume or OBV (Granville), parabolic time-price curves (Wilder),
etc.  Given the large number of different technical indicators for the
various markets (OBV is fine for gold but not for silver), AI could be very
well suited to this kind of application, primarily because technical
analysis rests on hard data (open, high, low, close, settle, volume, open
interest, interest rates, etc.).  This information is also available from
many dial-up databases for automatic entry for the particular market.  By
using an algorithm that first applies every programmed method to each
analysis and then takes those results and uses them for further analysis, a
reasonably accurate projection may be possible based on technical
indicators alone.
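
On-balance volume, mentioned above, is simple to compute from daily closes
and volumes; a minimal sketch (the function name and list-based interface
are my own illustration, not TAG's code):

```python
def obv(closes, volumes):
    """On-balance volume (Granville): a running total of volume,
    added on up days, subtracted on down days, unchanged otherwise."""
    total, series = 0, [0]
    for prev, close, vol in zip(closes, closes[1:], volumes[1:]):
        if close > prev:
            total += vol
        elif close < prev:
            total -= vol
        series.append(total)
    return series

print(obv([10, 11, 10, 10], [100, 200, 300, 400]))  # -> [0, 200, -100, -100]
```

A meta-level analyzer of the kind described would then treat series like
this one as inputs to a further stage of analysis.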

Fundamental information is harder to deal with.  Analysis based on
fundamental knowledge requires that some effect, positive or negative, be
attributed to a repeatable phenomenon.  Since news items often do not
repeat, the AI software would first have to separate the valid fundamental
indicators from the rest of the fundamental noise.  Over a period of time,
certain phenomena may be identified as pertinent to the trend of a given
market.  At that point, a tool could be written to search out this "signal"
in the "noise"; to be effective, the tool would need a high signal-to-noise
ratio.  The tool must be able to ascertain, for example: if a member of the
FRB says he thinks the discount rate will decline, what will be the
immediate and future impact on the price of soybeans?  Since historically
statements of this type have caused both higher and lower prices, depending
on other fundamental variables, a fundamental formula would have to be
derived that works with totally unknown abstract concepts.

The Technical Analysis Group (TAG) has a very complete package of
technical software; however, given the immediacy of the markets and the
small systems (Apple, IBM-PC) the programs run on, it is not feasible to do
a complete analysis of every market on a daily basis.  A multi-tasking AI
environment would be able to take the results of each of these TAG tools
and then work with those results.

I would recommend that any attempt to use AI for market analysis start
with technical analysis only, and that the AI environment gradually be fed
only certain types of fundamental data.

Bob

------------------------------

Date: Wed, 18 Jul 1984  11:18 EDT
From: REID%MIT-OZ@MIT-MC.ARPA
Subject: Commonsense Reasoning

Regarding the Fahrenheit/Celsius problem (if 32 is 0 and 212 is 100 ...).
Even there, the "obvious" answer is only obvious due to cultural
biases.  A computer might indeed solve the problem without "blinking an
LED," but the answer it is likely to come up with is NOT what we
think of as obvious.  Simply put, the "data" given fits equally well
with the hypothesis that the mapping is just the lower 3 (or 4) order
bits of the binary representation of the first number.  It would seem
to me that a computer would be more likely to hit upon this mapping,
unless it were endowed with a lot of "common sense" and (human)
cultural information.
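
Both hypotheses really do fit the two data points equally well; a minimal
sketch (function names are my own illustration):

```python
def linear(f):
    """The culturally 'obvious' mapping: Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

def low_bits(f, k=3):
    """The alternative mapping: read the k low-order bits of the binary
    representation as if they were decimal digits."""
    return int(format(f, "b")[-k:])

for f in (32, 212):
    print(linear(f), low_bits(f))   # both mappings send 32 -> 0, 212 -> 100
```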

   --- Reid Simmons ---

------------------------------

Date: 17 Jul 84  1157 PDT
From: John McCarthy <JMC@SU-AI.ARPA>
Subject: re: AI-speak ?? (from SAIL's BBOARD)

   [Forwarded from the Stanford bboard by Laws@SRI-AI.  This reply
       is in response to a request from David Cheriton@Navajo.]


jmc - As one of the importers of the first two terms from philosophy into
AI, I will say what I mean.

ontology - The dictionaries define it as the branch of philosophy that
studies what exists.  In the bad old days, they argued about whether
physical objects, disembodied spirits, God, exist.  Quine (1940s or 1950s?)
modernized the idea by saying that the ontology of a theory is the set
over which the bound variables range.  As a nominalist he favored an
impoverished ontology, e.g. just because you want to predicate  red(x),
doesn't mean that you need  red  or  redness  as an object.  The AI
usage is derived from Quine's and remains quite close to it.  The programs
or logical sentences have variables, and the ontology of the program
includes the sets from which these variables take values.  For example,
Mycin includes bacteria in its ontology, because some of its variables
range over bacteria (the kinds of bacteria, not individual bacteriums),
but doesn't have doctors.  It actually doesn't have patients either.

epistemology - In philosophy it means the study of knowledge, its sources
and limits.  Again AI usage is derived from that and remains fairly
close.  AI is more concerned than most philosophers with how the
knowledge is represented.  AI is concerned with "epistemologically
adequate" internal languages for programs, i.e. languages that are
adequate for representing the knowledge that can actually be obtained
with given opportunities to observe and experiment.  See McCarthy and
Hayes "Some philosophical problems from the standpoint of artificial
intelligence", Machine Intelligence 4, 1969.

teleology - I haven't used it in AI, so I can't speak precisely about
AI usage.  In philosophy it means explaining things by ascribing
purpose to them.  Extreme examples are, "The purpose of the rainbow
is to teach us that the next time God destroys the world it will be
by fire and not by water" and "The purpose of the ant is to teach us
not to be lazy".  Teleological explanations were driven out of
biology accompanied by considerable squabbling.  In AI the term
might be used to refer to goal-driven programs, but then it would
seem that the usage is further from the philosophical usage.

------------------------------

Date: Tue 17 Jul 84 11:45:17-PDT
From: Bruce Buchanan  <BUCHANAN@SUMEX-AIM.ARPA>
Subject: AI jargon

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

David,
  [...]  Let me try to give you a straight
answer on terminology.  I spent five years in hard-core philosophy
and never felt that the terms were well-defined there, so it is no
wonder that AIers who have adopted the terms from philosophy also
have no consistent definitions.  Dictionary defns are probably not
very illuminating on these things, so I haven't looked at what you
might have found there.
  ONTOLOGY -- a conceptual map, a systematic description of the
        objects in the world, a study of "what is"
        [or the discipline of creating an ontology]
  EPISTEMOLOGY -- a study of what we know & how we know it, usually
        broken into a priori and a posteriori (or empirical) knowledge
  TELEOLOGY -- a study of purposeful behavior (often, though not always,
        defined wrt God's purpose).

  This is oversimplified, of course.  I would recommend Plato's
Timaeus and Theaetetus on the first two, and Aristotle's Metaphysics
on the last.  By the time St. Thomas began writing about these things,
their definitions are not so clear as in Plato & Aristotle.
   Epistemology is the most relevant to AI in its emphasis on
knowledge -- what it is, where it comes from, etc.

  [...]

bgb

------------------------------

Date: Wed 18 Jul 84 14:35:14-EDT
From: David Rogers <DRogers%MIT-OZ@MIT-MC.ARPA>
Subject: an interesting implicit definition of intelligence

Can you spot the fallacy in this implicit definition of intelligence?

"As the programs become more refined and the network of paths and
boxes grow more complex, it becomes increasingly difficult to predict
what a computer will decide. In one second, it can process between 10
and 100 thousand logical inferences, or syllogisms. In 1981, the Japanese
government announced that it would provide almost a half a billion dollars
in seed money over the next decade to produce machines that will be able
to draw as many as 1 billion logical inferences per second.

If that goal is achieved, a computer could make, in one second, a decision
so complex that it would take a human 30 years to unravel it, assuming that
he or she could think constantly at the superhuman speed of 1 syllogism
per second. Given 10 seconds to ponder a problem, a computer's decision
would have to be taken on faith. By human standards it would be unfathomable.

When computers can have thoughts that would take more than a human lifetime
to understand, it is tempting to consider them smarter than their makers."


From "The Lure of Artificial Intelligence", by George Johnson, in the
APF reporter, Vol 7, No. 3.

(In a box at the bottom of the page, one reads "George Johnson, a freelance
writer, is reporting on the quest to build computers smarter than humans.")
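
The arithmetic behind the quoted claim is easy to reproduce; a quick check
(one decision of a billion inferences, unravelled at one syllogism per
second) shows "30 years" is a round-down of about 31.7:

```python
inferences = 10**9                  # the quoted fifth-generation target, per second
human_rate = 1                      # syllogisms per second, the article's "superhuman" pace
seconds = inferences / human_rate   # time for a human to unravel one second's work
years = seconds / (3600 * 24 * 365)
print(round(years, 1))              # -> 31.7
```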

------------------------------

Date: 16 Jul 1984 1329 PDT
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: adverts

I don't mind messages like the one (very indirectly) from IBM.  I'm quite
capable of recognizing even very covert bias and would discount it
automatically.  I'd much rather do my own filtering than have a moderator
do it.

                        larry @ jpl-vlsi

------------------------------

Date: Tue 17 Jul 84 20:29:07-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: GMR DATALOG Demo at AAAI

Kurt Godden informs me that General Motors Res. Labs. will have a live
demo of their DATALOG natural language query system at their AAAI
Villa Capri hospitality suite August 7-8.  Visitors can also discuss
GMR projects in expert systems, natural language, computer vision,
robotics, etc.

                                        -- Ken Laws

------------------------------

Date: 16 Jul 84 18:35-PDT
From: mclure @ Sri-Unix.arpa
Subject: Delphi Experiment: move 2 please?

                            The Vote Tally
                            --------------
Folks, the moves are in and have been tallied.  The winner is: 1. ... c5.

A total of 21 votes were cast.  Originally there was a tie between 1.
...  c5 and 1.  ...  e5 with 7 votes each.  I cast the deciding vote in
favor of the Sicilian because machines play optimally in Classical
double-KP positions and we want to avoid positions the machine likes.

                          The Machine Moves
                          -----------------
The Prestige 8-ply replied 2. Nf3 from book in 0 seconds.

                Humans                    Move   # Votes
        BR BN BB BQ BK BB BN BR         1 ... c5     8
        BP BP ** BP BP BP BP BP         1 ... e5     7
        -- ** -- ** -- ** -- **         1 ... e6     2
        ** -- BP -- ** -- ** --         1 ... d5     1
        -- ** -- ** WP ** -- **         1 ... d6     1
        ** -- ** -- ** WN ** --         1 ... f5     1
        WP WP WP WP -- WP WP WP         1 ... Nc6    1
        WR WN WB WQ WK WB ** WR
             Prestige 8-ply

                           The Game So Far
                           ---------------
1. e4    c5
2. Nf3

    Your move, please?

        Replies to Arpanet: mclure@sri-unix or Usenet: sri-unix!mclure.
        DO NOT SEND REPLIES TO THE ENTIRE LIST! Just send them to one of
        the above addresses.

                               Addendum
                               --------
For readers who don't understand all of this, I am conducting a Delphi
experiment wherein a large network-based readership can send moves in
for a chess game.  Each reader's move is a vote that is combined with
other readers' votes.  The move with the most votes is played against
the Prestige chess machine searching a minimum of 8 full ply deep.  At
this level it is probably playing around the ELO 2200 level.  The results
will eventually be published in a journal along with an analysis of the
experiment.

[In view of the limited number of respondents, I shall have to discontinue
publishing the play-by-play in this digest.  Please contact mclure@sri-unix
if you wish to follow the game.  -- KIL]

------------------------------

End of AIList Digest
********************

∂21-Jul-84  1638	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #94
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Jul 84  16:33:37 PDT
Date: Sat 21 Jul 1984 15:11-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #94
To: AIList@SRI-AI


AIList Digest            Sunday, 22 Jul 1984       Volume 2 : Issue 94

Today's Topics:
  Expert Systems - INDUCE/PLANT Request,
  AI Tools - LOGO & Islisp/Franz Lisp Utility Functions,
  News Sources - NEXIS, NEWSNET,
  Conferences - AAAI-84 Registration,
  Intelligence - Metrics & Understanding & Evolution,
  Books - Structure and Interpretation of Computer Programs,
  Linguistics - Tense and Aspect,
  Seminars - Second-Order Polymorphic Lambda Calculus
    & Probabilistic Analysis of Hierarchical Planning Problems
----------------------------------------------------------------------

Date: 15 Jul 84 12:42:15-PDT (Sun)
From: ihnp4!alberta!calgary!masrani @ Ucb-Vax.arpa
Subject: request for information (INDUCE/PLANT expert system)
Article-I.D.: calgary.473

I would appreciate any information (references) on an expert system
called INDUCE/PLANT.  Apparently, this system is able to induce rules
for diagnosing soybean diseases from example cases.  Thanks in advance.

Roy Masrani, University of Calgary.
..!alberta!calgary!masrani

------------------------------

Date: Thu, 19 Jul 84 20:53 EDT
From: Jill Smudski <Jill%upenn.csnet@csnet-relay.arpa>
Subject: LOGO implementation

I am looking for an implementation of LOGO to run on a Symbolics 3600 Lisp
Machine. Can anyone send me information about such a thing?  Also, any
pointers to current computer science (as opposed to educational) research
with LOGO would be appreciated.

    Thanks,
      Jill Smudski            mailing address:  University of Pennsylvania
                                                Moore School rm 66
                                                33rd and Walnut Sts
                                                Philadelphia, PA 19104

------------------------------

Date: 19 Jul 1984 11:38:31-EDT
From: Philip.Kasprzyk at CMU-RI-ISL2
Subject: Islisp/Franz Lisp Utility Functions

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I am building a Lisp Utility Function Library for use in Islisp.
In addition to the built-in Islisp functions, does anyone out there
know of any existing libraries or have any private libraries that
they could contribute to the cause?

Documented functions would be nice but I can live with undocumented
code as long as the functionality of the code is obvious.

If you can help me please send mail to pmk@isl2.

The current status of this project is that I have slightly over 200 functions
which I am in the process of categorizing.

                           -Thanks
                          Phil Kasprzyk

------------------------------

Date: 18 Jul 84 8:18:46-PDT (Wed)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!gmf @ Ucb-Vax.arpa
Subject: Request for Franz Lisp info
Article-I.D.: uvacs.1375

I will be grateful for information on where to get a Franz Lisp
with its usual library written in C to be run on an Intel 80286
(new, compatible with 8086) running under Xenix ("small" UNIX).
There will be plenty of memory (hard disk).  Presumably any
Franz Lisp written in C which will run on a VAX 11/780 using
UNIX will be OK.

     Gordon Fisher
     c/o CS Dept
     Univ of Virginia
     Charlottesville, Va 22901

     ...!mcnc!ncsu!uvacs!gmf

------------------------------

Date: Sat 21 Jul 84 09:56:02-EDT
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: NEXIS, NEWSNET & AI

     I would like to know if anyone here has searched occasionally, or
searches regularly, NEXIS and/or NEWSNET (two commercial databases
which store the full text of many leading U.S. magazines and
newsletters). Has anyone found either database to be a particularly
useful source of information about developments in artificial
intelligence and related topics? Opinions, impressions, evaluations,
tips, gripes, etc. would be appreciated.

-- Wayne McGuire --

------------------------------

Date: Sat 21 Jul 84 10:55:04-PDT
From: AAAI-OFFICE <AAAI@SRI-AI.ARPA>
Subject: AAAI-84 Registration


 Hello!
 We've had an overwhelming response to the AAAI-84 Conference
 this year. We have room for 2,600 people for the Technical Program,
 and, at this time, we have about 50 seats left. If you expect
 to walk-in, please call Kathy Kelly (415-328-3123) to make
 some alternate arrangements.

 AAAI Office

------------------------------

Date: Fri, 20 Jul 84 00:28 EDT
From: Henry Lieberman <Henry%MIT-OZ@MIT-MC.ARPA>
Subject: Implicit Definition of Intelligence


I see the fallacy as being that the word "decision" in the article connotes
some willful decision of real significance to a human, such as
"Should I vote for Reagan or Mondale?". The article confuses it in the lay
reader's mind with the computer science sense of "decision", or primitive
conditional, like "If this location is zero, I'll skip to the next
instruction".  Obviously, one decision of a person in the former sense
may involve zillions of primitive conditionals, so the human and
machine "speeds" are not directly comparable.

Why wait for the fifth generation?  The Lisp machine I'm using right now is much
smarter than a person, because a person can consciously consider only a few new
subgoals every second, whereas a Lisp machine can do a million function calls a
second.

------------------------------

Date: Thu, 19 Jul 84 09:52:55 PDT
From: Adolfo Di-Mare <dimare@UCLA-LOCUS.ARPA>
Subject: An interesting implicit definition of intelligence

The following is an even more intelligent program than the one described
in the Lure of AI. Any human being trying to figure it out will die blue
even for small values of n:

                n + 1                   m = 0
            /
AI(m,n) =  <    AI(m-1,1)               n = 0
            \
                AI(m-1,AI(m,n-1))       otherwise

        Adolfo
              ///

P.S.I  The A in the above definition stands for Artificial.  The I stands
       for Intelligence (it's easy when you know it).
P.S.II I couldn't come up with the Prolog version, which is far more
       intelligent.
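
The recurrence transcribes directly; a minimal sketch (naive recursion,
hopeless for even modest m and n, which is exactly the point):

```python
def AI(m, n):
    """Ackermann's function, transcribed from the definition above:
    n+1 when m=0; AI(m-1,1) when n=0; AI(m-1, AI(m,n-1)) otherwise."""
    if m == 0:
        return n + 1
    if n == 0:
        return AI(m - 1, 1)
    return AI(m - 1, AI(m, n - 1))

print(AI(2, 3))  # -> 9
print(AI(3, 3))  # -> 61; already AI(4, 2) has 19,729 decimal digits
```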

------------------------------

Date: Thu 19 Jul 84 16:49:27-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: An interesting implicit definition of intelligence

           [In response to my personal query about the "AI"
             function in Adolfo DiMare's message. -- KIL]

Yes, that's the Ackermann function. The problem with it for benchmarking
is that it is hyperexponential, and so the intermediate values soon
become too big to represent (even with bignums!) -- Fernando

------------------------------

Date: Thu, 19 Jul 84 02:26 PDT
From: Gloger.es@XEROX.ARPA
Subject: Re: The Turing Test - machines vs. people

Someone (I no longer have any record of who) apparently said:

>>  If a program passes a test in calculus the best we can grant
>>  it is that it can pass tests.  ...
>>  We make the same mistaken assumption about humans--that is
>>  that because you can pass a "test" you understand a subject.

To which Dave Seaman replied:

>  Suppose the program writes a Ph.D. dissertation and passes its
>  "orals"?  Then can we say it understands its field?  If not,
>  then how can we decide that anyone understands anything?

Implicit in the first quote is the answer to the second.  We cannot
(absolutely) decide that anyone understands anything, i.e. that
understanding exists, since "understanding" as used here is not a
scientific observable.  We can, if we wish, observe the observables,
like test passing.  And we can choose to infer from them the existence
of a causative agency for them, like "understanding" for test passing.
But this inference is true only to the extent that we can observe the
agency; and it is valid only to the extent that from it we can deduce
other, observably true and useful facts.

If you're willing for "understanding" to mean some observable thing,
like passing of some tests or other, then you can decide if someone
"understands" something, i.e. if "understanding" exists.  Otherwise, you
can't absolutely decide where, when, or how much understanding exists or
doesn't exist.

And ditto the entire preceding discussion with the buzzword
"understanding" replaced by "intelligence."  And again, replaced by
"luck."  And again, by "soul."  And again, by "god."

(Credit for the basis of much of my argument is due to Prof. Andrew J.
Galambos.)

Paul Gloger
<Gloger.es@Xerox.arpa>

------------------------------

Date: 19 Jul 84 11:42:10-PDT (Thu)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!abh @ Ucb-Vax.arpa
Subject: Re: Brain and technology
Article-I.D.: ccivax.183

I beg to add that if the brain did not have advanced coordination
centers for speech and hand, thumbs would be as useful as
your big toes.
                                        Andrew Hudson
--
Christine, strawberry girl,
Christine, banana split lady.....
                         - Siouxsie & the Banshees

        ...[rlgvax | decvax | ucbvax!allegra]!rochester!ritcv!ccivax!abh

------------------------------

Date: 20 July 1984 16:51-EDT
From: Hal Abelson <HAL @ MIT-MC>
Subject: publication announcement


The book "Structure and Interpretation of Computer Programs" written
by Gerry Sussman, Julie Sussman, and me has just been published
(jointly by MIT Press and McGraw-Hill).

This book is based on the introductory programming course that we
teach at MIT.  All programming is done in the Scheme dialect of Lisp
(which nowadays is the entry-level programming language used at MIT).
We attempt to present an "AI/Lisp"-flavored introduction to the issues
of coping with the complexity of large software systems.  We hope that
this point of view can become an alternative to the Pascalitis that
has infected so much of computer science education.

Copies of the book should be available at the Lisp conference and at
AAAI.

------------------------------

Date: Thu 19 Jul 84 16:52:04-PDT
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: Tense and Aspect

  [Forwarded from the CSLI Newsletter by permission of the author.]

The difficulty of analyzing the semantics of tense and aspect in
natural language has been widely discussed in the past few years, but
I hadn't realized the extent to which this problem plagued medieval
scholars until I found this item in The Oxford Book of Oxford (Jan
Morris, ed.):

     Three Oxford academics were deputed to wait upon Henry III
     in 1266 to ask permission for a postern gate through the
     city wall at Oxford.  The King (in Latin) asked them what
     they wanted:

     First scholar: We ask the licence for the making of a gate
     through the city wall.

     Second scholar: No, we do not want the making of a gate, for
     that would mean the gate was always in the making and never
     made.  What we want is a gate made.

     Third scholar: No, we do not want a gate made, for a gate
     made must already be in existence somewhere else, and so we
     should be taking somebody else's gate.

     The King told them to go away and make up their minds.  When
     they returned in three days' time they had agreed on a
     formula:

     We ask permission that the making of a gate be made.
     [Ostium fieri in facto esse].

     Permission was granted.

------------------------------

Date: 19 July 1984 09:52-EDT
From: Arline H. Benford <AH @ MIT-MC>
Subject: Seminar - Second-Order Polymorphic Lambda Calculus

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                         DATE:  Tuesday, July 24, 1984
                         TIME:  1:45PM Refreshments
                                2:00PM Lecture
                        PLACE:  NE43-512A


                   "THE SEMANTICS OF EXPLICIT POLYMORPHISM"

                                 Kim B. Bruce
                      Department of Mathematical Sciences
                               Williams College
                               Williamstown, MA


Facilities for defining GENERIC or POLYMORPHIC routines, i.e., routines which
can have several typed instantiations, are now available in programming
languages such as Ada, Clu, and ML.  We consider a typed calculus called
second-order polymorphic lambda calculus as a theoretical model for
EXPLICIT polymorphism -- where types appear as parameters of procedures as in
Ada and Clu (and as opposed to ML).
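
In second-order terms, a polymorphic routine literally takes a type as an
argument and returns a specialized instance; a loose sketch of the
distinction (Python stands in for the calculus, and the names are my own
illustration):

```python
# Explicit polymorphism: instantiate by passing the type as a parameter,
# as with Ada or Clu generics.
def make_swap(t):
    def swap(x, y):
        assert isinstance(x, t) and isinstance(y, t)  # typed instantiation
        return (y, x)
    return swap

swap_int = make_swap(int)   # the 'type application' step
print(swap_int(1, 2))       # -> (2, 1)

# Implicit (ML-style) polymorphism: one definition, no type parameter;
# each use is specialized by inference rather than by application.
def swap(x, y):
    return (y, x)
```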

The proof theory of the second-order polymorphic lambda calculus was reasonably
well understood before there was much concern over its semantics (as is common
in the development of logical systems).  The problem with assigning semantics
to the system is that terms may be applicable to their own types, which
involves a classical type violation.  Reynolds attempted to construct a kind of
set-theoretic model for the language but ran into difficulties, and
subsequently demonstrated that no such model is possible.  Donahue
attempted to construct a model using complete lattices and also failed.
Finally, McCracken and also Haynes successfully constructed models using Scott
domains.

We describe in this talk a general notion of model for the second-order lambda
calculus.  In support of our definitions we establish soundness and
completeness results relative to our semantics for the previously given axiom
system for the calculus.  We also review related results and open problems.

HOST:  Professor Albert R. Meyer

------------------------------

Date: Fri, 20 Jul 84 10:33:05 PDT
From: Guy M. Lohman <lohman%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR.IBM-SJ@csnet-relay.arpa>
Subject: Seminar - Probabilistic Analysis of Hierarchical Planning Problems

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Wed., July 25 Computer Science Seminar
  10:30 A.M.  PROBABILISTIC ANALYSIS OF HIERARCHICAL PLANNING PROBLEMS
  Aud. A      Multi-level decision problems can often be modeled as
            multi-stage stochastic programs.  Hierarchical
            planning systems designed for the solution of such
            problems can then be viewed as stochastic programming
            heuristics, and they can be subjected to the same
            kind of analytical performance analysis that has
            become customary in the area of combinatorial
            optimization.  We will give a general formulation of
            these multi-stage stochastic programs and sketch a
            framework for the design and analysis of heuristics
            for their solution.  The various ways to measure the
            performance of such heuristics are reviewed, and some
            relations between these measures are derived.  Our
            concepts are illustrated on a simple two-level
            planning problem of a general nature and on a more
            complicated two-level scheduling problem.  This talk
            is based on joint work with Alexander Rinnooy Kan and
            Leen Stougie.

            J. K. Lenstra, Department of Computer Science,
            University of California at Berkeley
            Host:  B. Simons

            [...]

------------------------------

End of AIList Digest
********************

∂25-Jul-84  0101	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #95
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Jul 84  01:01:22 PDT
Date: Tue 24 Jul 1984 23:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #95
To: AIList@SRI-AI


AIList Digest           Wednesday, 25 Jul 1984     Volume 2 : Issue 95

Today's Topics:
  Expert System - Ben-Zoma Mailing List & Plant/Induce reference,
  AI Tools - XLISP Sources,
  Parapsychology - Justification & Mailing List,
  Administrivia - Lab Reports and Project Descriptions,
  Seminar - Learning State Variables,
  Project - Engineer's Assistant for Fault Diagnosis
----------------------------------------------------------------------

Date: Mon, 23 Jul 84 15:00:46 pdt
From: Angela Shiflet <shiflet@lll-crg.ARPA>
Subject: Ben-Zoma Mailing List

                          RESEARCH ANNOUNCEMENT

Ben-Zoma is a knowledge- and experience-based scientific/engineering advisor
which converses in technical English.  It learns by abstracting
solution methods and language understanding derived from assisting users.
Requests, once converted to a frame-like form, are dispatched to a
distributed consortium of experts.  These sources of expertise (e.g. MACSYMA
on LISPM, LM on a Cray, SMP on a VAX, as well as new experts) are callable
through drivers written in LISP.  Ben-Zoma will create and dispatch code for
appropriate special-purpose processors (e.g. Crays, Cosmic Cubes, data flow
machines, and VAX arrays).  Graphical displays of numerical data will be
incorporated where appropriate.

Work on this project has begun under the direction of Dr. Ted Einwohner
at the University of California, Lawrence Livermore National Laboratory,
Computing Research Group, under contract to the Department of Energy.

Comments and suggestions are welcomed:

        ben-zoma-discussion@lll-crg.ARPA

To be added to this list send mail to:

        ben-zoma-discussion-request@lll-crg.ARPA

------------------------------

Date: Mon 23 Jul 84 14:06:09-PDT
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: Plant/Induce reference

Roy,
        The Plant/Induce work was done by Ryszard Michalski at the
University of Illinois, Urbana, Illinois. The paper I have is:

Michalski, R.S., and Chilausky, R.L.  Learning by being told and learning
from examples: an experimental comparison of the two methods of knowledge
acquisition in the context of developing an expert system for soybean
disease diagnosis. International Journal of Policy Analysis and Information
Systems, Vol 4, No. 2, 1980.

        I believe he also published a version in 1981 in the International
Journal of Man-Machine Studies.

        If you are interested in systems that learn rules automatically,
you might also want to look at Peter Politakis' SEEK system described in
Artificial Intelligence, January 1984.

                                        Mike Walker
                                        walker@sumex-aim.arpa

------------------------------

Date: 20 Jul 84 8:28:06-PDT (Fri)
From: hplabs!hao!seismo!rlgvax!cvl!jcw @ Ucb-Vax.arpa
Subject: XLISP sources posted to net.sources
Article-I.D.: cvl.1193

After dozens of requests reached me, I decided to post the sources to
David Betz' XLISP interpreter.  It is written entirely in C, commented
uncommonly well, fairly portable, and has some rather neat features
including primitives for object-oriented programming.

The program is in the public domain, Copyright by David Betz.

Jay Weber
..!seismo!rlgvax!cvl!jcw
..!seismo!rochester!jay
jay@rochester.arpa

------------------------------

Date: 20 Jul 84 13:10:50-PDT (Fri)
From: hplabs!hpda!fortune!amd!decwrl!dec-rhea!dec-pbsvax!cooper@Ucb-Vax.arpa
Subject: Why discuss super- and para-normal phenomena
Article-I.D.: decwrl.2741


"Alex.Rudnicky" asks:

    "It may be fun to speculate about the super-normal and the para-normal,
    but what does it have to do with AI?"

Answer: A number of things.  First let's discuss the "super-normal", a phrase
which I will take to refer to "exceptional human performance."

1) Like it or not, we're stuck (at least for now) with describing the "I" in
"AI" in terms relative to human performance.  In most instances average human
performance is all that is required of our programs, but sometimes, the
exceptional is called for.  Human performance serves as a guide to what CAN
be done, because it HAS been done.  According to the standard assumptions of
AI, if humans can do it, so can sufficiently powerful, well-programmed
machines.  If humans cannot, then there may well be an NP-hard or worse problem
involved.  For example: can the type of associative memory retrieval associated
with human intelligence be merged effectively with total recall?  Or must the
information available by free-form association always be strictly limited?
If total well-indexed recall is achieved by at least one human being, then it can
presumably also be done by a machine.  Otherwise, it remains an open question.

2) An understanding of human information processes is seen as either absolutely
necessary or (depending on what "school" of AI philosophy you subscribe to) at
least very useful to AI programming.  If your model of human information
processing cannot account for exceptional human performance then it is either
incorrect or incomplete.  Knowledge that some adults have "eidetic" memory
(near perfect image memory) may well be critical to understanding all memory.
Knowledge that a large percentage of children (perhaps all if we could test
them young enough) have eidetic memory and then lose it as they grow up,
should be taken into account in theories of information acquisition from a
near "tabula rasa" state.

In other words, knowing the limits of human information processing is, in the
long term, very important to the field.  In the short term, given our
distance from our ultimate goals, the need is less critical.  Some relatively
brief exchanges in an informal forum seem appropriate to keep people thinking
about it.

Which brings us to the paranormal.  First of all, claims of paranormal
abilities would seem to be included as exceptional human information processing
capabilities.  My previous comments about the "super-normal" apply.
Some effort (probably not much from the viewpoint of current AI) should be
applied to determine whether or not, in general, the phenomena exist and if so
whether they should be considered as an exceptional cognitive skill or only
an exceptional perceptual skill.  In the latter case its relevance is much
reduced. (My own opinion is: the experimental evidence makes it much more
likely than not that psi exists, and I would tend to see it as sensory/motor
rather than cognitive, though ESP seems to share many characteristics with
memory and dreaming).

Second, paranormal phenomena cast considerable doubt on the basic assumption
of AI: that human cognitive function can be explained as information processing
and therefore can be simulated or approximated by a sufficiently powerful and
well programmed artificial symbolic processor.  This is of minor pragmatic
concern if psi is simply a rarely used IO channel.  Several parapsychologists
have theorized, however, that psi functioning as perception is simply "leakage"
from its fundamental purpose in the organism; to wit, an essential part of
one or another cognitive function.  Candidate functions I have seen mentioned
are intuition, creativity and memory.  If so, (and I personally doubt it) then
human cognitive processing may not be simulatable on a Turing machine but only
on a Turing machine plus (you'll pardon the expression) oracle.

IN SUMMARY: while it seems premature to spend too much time now worrying about
exceptional (particularly paranormal) human performance,  the AI community
should remain aware of this area.  It might become very important to us and
we should not be caught unaware.

                                Topher Cooper

USENET: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
ARPA: cooper%pbsvax.DEC@decwrl.ARPA

------------------------------

Date: 20 Jul 84 13:12:05-PDT (Fri)
From: hplabs!hpda!fortune!amd!decwrl!dec-rhea!dec-pbsvax!cooper@Ucb-Vax.arpa
Subject: Continuation of ESP discussion available.
Article-I.D.: decwrl.2742


On May 21 Ken Laws posted a reply summarizing an article from Dr. Dobb's
containing a theory of ESP.  I replied with a detailed criticism of the theory
(at least as summarized) and suggested that further contributions be mailed
to me rather than posted. I have put together a single file containing:

        1) A repeat of the original pair of articles.
        2) Some corrections/updates to my article.
        3) The five responses I got from my article.
        4) My replies to those five responses.

The compilation is 745 lines long.  Anyone who is interested in getting a
copy should send me mail requesting it.  Unless you request otherwise you
will also be added to a mailing list to receive the next round, if there is
one.  Your name and location will be kept confidential.  Any submissions for
the next round will be public unless you request that I remove your name and
location from the posting.

                                Topher Cooper

USENET: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
ARPA: cooper%pbsvax.DEC@decwrl.ARPA

------------------------------

Date: Tue 24 Jul 84 23:17:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Lab Reports and Project Descriptions

Moderating the digest has become sufficiently routine that I
can devote increased time and creative energy to shaping the
contents.  I can thus accept submissions to new digest
"departments" for lab reports, project summaries, and abstracts
of recent or current work.  My intention is to better inform
readers by publishing "promotional" material originally written
for other audiences.  This is similar to AIList's circulation
of seminar abstracts, a feature that I consider highly successful.

I therefore encourage list members to send abstracts of their
technical reports, conference papers, and journal articles to
AIList.  Usenet members should preferably send such items directly
to AILIST@SRI-AI rather than through net.ai, although the usual
mechanisms will operate to prevent double distribution of net.ai
submissions.  I shall screen the items and publish them in
coherent groups as the digest load permits.  The digesting
delay for such material may be several weeks, but I shall try
to keep the backlog to a reasonable size by publishing special
issues of abstracts as necessary.

I shall also pass along a limited number of carefully edited
messages derived from Arpanet-distributed position postings and
similar material.  I shall take considerable liberties with the
arrangement and format of the original texts without inserting
[...] annotations, and shall suppress explicit solicitations
(although the unofficial custom on the Arpanet has been to permit
such commercialism by academic institutions).  I shall also try
to avoid repeating boilerplate lab descriptions that AIList has
already published.  Nonacademic institutions may [occasionally]
submit similar promotional material so long as Arpanet standards
are respected.  My decision to distribute such material will be
based solely on interest to the general AIList reader, not on the
potential benefit of filling AI-related positions.

Please don't dump all of your archived blurbs on me today
or tomorrow; we have plenty of time.  I should like to see
the submissions dribble in over a period of >>years<<, so
wait until an appropriate opportunity (e.g., when a related
discussion comes up in the digest or when your dissertation
goes to press).  Eventually we shall reach a steady state
with material being submitted as it is produced for other
purposes.

I anticipate that these news items will require more editing than
normal submissions, particularly the lab reports derived from
promotional material.  You can simplify my job if you provide a
meaningful "Subject:" line such as the "Seminar - ..."  headers I
have been distributing.  Keywords such as "Abstract" and
"Project" should be followed by a very short title that readers
can use to screen the messages.  The submissions themselves
should be concise and closely related to the interests of the
AIList readership.  (The enthusiasm of your colleagues, bosses,
and sponsors for your 200 papers on educational parapsychology
may not be shared by a general audience.)  Please include
sufficient "Contact:" information (e.g., address and phone number)
that I shall not have to help readers wanting further information.

I shall be fairly strict about screening material I consider
marginal, and should appreciate your consideration in minimizing
this unpleasant part of my responsibilities.  Rejections will be
handled by "form letter", and generally will not include detailed
justifications.  I hope that few will interpret such a notice
as an invitation to debate or the opening round in a series of
negotiations.

Comments to AIList-Request@SRI-AI on this new policy will be
helpful in determining whether this experiment should be modified
or discontinued.  (Your silence will be interpreted as lack of
disapproval.)  I shall keep list readers informed of any
significant trends in the expressed opinions.

                                        -- Dr. Kenneth I. Laws
                                           AIList Moderator

------------------------------

Date: Tue 24 Jul 84 12:36:44-PDT
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: Seminar - Learning State Variables

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, July 27, 1984
LOCATION:    Chemistry Gazebo, between Physical & Organic Chemistry
TIME:        12:05

SPEAKER:     Tom Dietterich
             Heuristic Programming Project
             Stanford University

TOPIC:       Learning About Systems That Contain State Variables

It is difficult to  learn about systems  that contain state  variables
when  those  variables  are   not  directly  observable.   This   talk
formalizes this  learning problem  and presents  a method  called  the
iterative extension method for solving it.  In the iterative extension
method, the  learner  gradually constructs  a  partial theory  of  the
state-containing system.   At each  stage,  the learner  applies  this
partial theory to interpret the I/O behavior of the system and  obtain
additional constraints  on  the  structure and  values  of  its  state
variables.  These constraints  can be  applied to  extend the  partial
theory by  hypothesizing  additional internal  state  variables.   The
improved theory  can then  be applied  to interpret  more complex  I/O
behavior.  This process continues until a theory of the entire  system
is obtained.  Several  sufficient conditions for  the success of  this
method  will  be  presented   including  (a)  the  observability   and
decomposability of  the  state  information in  the  system,  (b)  the
learnability of individual  state transitions in  the system, (c)  the
ability of the learner to perform synthesis of straight-line  programs
and conjunctive predicates from  examples and (d)  the ability of  the
learner to perform theory-driven  data interpretation.  The method  is
being implemented and  applied to  the problem of  learning UNIX  file
system commands by observing a tutorial interaction with UNIX.
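[The iterative loop described in the abstract (interpret the system's I/O
with the current partial theory, derive constraints on the hidden state,
hypothesize new state variables, repeat) can be sketched roughly as follows.
Every function name here is a hypothetical placeholder of my own, not taken
from Dietterich's implementation. -- KIL]

```python
def iterative_extension(system_io, initial_theory, explains_all,
                        interpret, hypothesize_states):
    """Rough sketch of the iterative extension loop described above.

    Each pass interprets the observed I/O behavior with the current
    partial theory, derives constraints on the unobservable state
    variables, and extends the theory by hypothesizing new ones.
    All four callables are placeholders for the learner's machinery."""
    theory = initial_theory
    while not explains_all(theory, system_io):
        # Theory-driven data interpretation yields constraints on state.
        constraints = interpret(theory, system_io)
        new_theory = hypothesize_states(theory, constraints)
        if new_theory == theory:
            break  # no progress can be made with the available constraints
        theory = new_theory
    return theory
```

The sufficient conditions (a)-(d) in the abstract are, in these terms,
conditions under which each pass of the loop actually makes progress.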

------------------------------

Date: Thu 12 Jul 84 04:33:14-PDT
From: MEISENSTADT@USC-ECLB.ARPA
Subject: Project - Engineer's Assistant for Fault Diagnosis


Human Cognition Research Laboratory, Open University, Milton Keynes, England

LOCATION: 50 miles north of London (between 38 and 60 minutes
          by train, depending upon the service).

COMPUTING FACILITIES:
   Symbolics 3600 Lisp Machine (for the dedicated use of this project), VAX
   11/750 running NIL, Prolog, and POP-11, and dedicated lines to the Open
   University's three DECsystem-20's running Interlisp, Maclisp, Edinburgh
   Prolog, etc. ALL TERMINALS IN OUR LAB ALSO HAVE DIRECT ARPANET ACCESS.

ACTIVE AI PERSONNEL: Two tenured staff members, (Marc Eisenstadt and Jon
   Slack), three research fellows, three Ph.D. students, and one consultant
   programmer, all of whom constitute the Human Cognition Research
   Laboratory's mainstream AI people.  The OU also has other active
   AI researchers on site, working under Max Bramer in the Maths Faculty
   and Tim O'Shea in the Institute of Educational Technology.  We are a
   vigorous and growing group of researchers, and our current manageable
   size enables us to offer the best AI computing facilities of any
   academic institution in Europe.

PROJECT: "A Knowledge Engineer's Assistant for Constructing
   Knowledge Based Fault Diagnosis Systems"

PROJECT SYNOPSIS: We are building a repertoire of rapid prototyping tools
   intended to speed up both the analysis of verbal protocols (such as
   those obtained during interviews with domain experts), and also the
   encoding of elicited knowledge into implementable form. The applied
   aspect of this work is the design of intelligent cross-referencing
   and browsing facilities linked directly to a coding window. The
   theoretical aspect of this work is an investigation of the process
   of theory-formation as typified by modern day Knowledge Engineers.

CONTACT: MEISENSTADT@USC-ECLB, or telephone (international) 011-44-908-653149
   or 011-44-908-661566.

------------------------------

End of AIList Digest
********************

∂26-Jul-84  1439	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #96
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Jul 84  14:38:07 PDT
Date: Thu 26 Jul 1984 13:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #96
To: AIList@SRI-AI


AIList Digest            Friday, 27 Jul 1984       Volume 2 : Issue 96

Today's Topics:
  AI Culture - Genealogy of AI,
  AI Literature - Robotics Directory,
  AI Languages - Lisp Speed Benchmarks,
  Humor - Naming Names,
  Review - Neuroanatomy and Electromagnetic Waves,
  Turing Test - Discussion
----------------------------------------------------------------------

Date: 23 Jul 84 15:41:57-PDT (Mon)
From: hplabs!hpda!fortune!amd!decwrl!flairvax!pfps @ Ucb-Vax.arpa
Subject: Genealogy of AI
Article-I.D.: flairvax.677

David S. Johnson of AT&T Bell Labs has gathered together a genealogy of
theoretical computer science along intellectual, rather than biological,
bloodlines.  That
is, the parent-child relationship has been replaced by the more significant
PhD advisor-advisee relationship. I have just seen a report that he produced
that contains 672 of these entries and several genealogical trees taken from
the data and I thought that it would be a nice idea to produce a similar
listing for Artificial Intelligence.  This data could show how AI has spread
from its initial centres to its current broad coverage.

One problem with such a genealogy is collecting and organizing all the data.
Therefore I am asking for anyone who wants to contribute data about AI
advisor-advisee relationships to send mail to me.  To make the organizing
process easier I would like all responders to follow the strict format
detailed below:

        1/ no ``bug killer'' lines
        2/ each entry is one line and should contain the following information
            a) advisee name with surname first
            b) advisor name with surname first
            c) institution where degree granted (in a short format)
            d) year in which degree granted (all four digits)
            e) type of degree (PhD, MSc, or other graduate degree)
            f) field of research (AI, Physics, Mathematics, etc.)
            g) area of research (natural language, expert systems, etc.)
            h) current affiliation of advisee (in a short format)
        3/ fields separated by # characters
        4/ unknown values indicated by ? characters
        5/ null values indicated by empty fields
        6/ all entries together at the beginning of the message and followed
           by a blank line

Here are three sample entries:

Cohen, Robin#Perrault, Ray#Toronto#1984#PhD#AI#natural language#Waterloo
Patel-Schneider, Peter F.#Mylopoulos, John#Toronto#1978#MSc#AI#knowledge
  representation#FLAIR
Patel-Schneider, Peter F.#McCalla, Gord#Toronto#1978#MSc#AI#knowledge
  representation#FLAIR

  [I had to break the longer lines for AIList. -- KIL]
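  [For anyone who wants to machine-check entries before mailing them, the
  field layout above can be parsed mechanically.  A minimal sketch (the
  function and field names are my own illustration, not part of the
  proposed format):

```python
# Illustrative parser for the '#'-separated genealogy entries described
# above.  Per the format rules: fields are separated by '#', unknown
# values are '?', and null values are empty fields.
FIELDS = ["advisee", "advisor", "institution", "year", "degree",
          "field", "area", "affiliation"]

def parse_entry(line):
    """Split one entry into a dict; '?' and empty fields become None."""
    values = [v.strip() for v in line.split("#")]
    return {name: (None if v in ("", "?") else v)
            for name, v in zip(FIELDS, values)}

entry = parse_entry(
    "Cohen, Robin#Perrault, Ray#Toronto#1984#PhD#AI#natural language#Waterloo")
```

  Entries with fewer than eight fields simply yield a shorter dict, which
  makes missing trailing fields easy to spot. -- KIL]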

When entering names try to use the name from the thesis unless the person's
name has since changed and the changed version is better known.  For
institution names try to use the shortest name in common use which is unique.
For example, use Toronto for University of Toronto, FLAIR for Fairchild
Laboratory for Artificial Intelligence, and other well-known short forms such
as Berkeley and MIT but NOT UofT for the University of Toronto or the
University of Texas.  Fields of research include AI, Computer Science,
Physics, Mathematics, Philosophy, Psychology, and Chemistry.  The idea here is
to find out the backgrounds of AI people which is why AI is separated from
Computer Science.  Areas of research are most important when the field is AI
and areas within AI include but are not limited to natural language, knowledge
representation, expert systems, theorem proving, learning, and vision.

A thesis advisor is that person officially recognized by the granting
University as the advisor.  If there is no such person then the advisor is
that person who guided the research for the thesis.  Such an advisor should be
indicated by a '*' character after the name.  If the official advisor did
nothing besides act as a signing agent and someone else really did all the
work then include two entries for the thesis, one listing the official advisor
and the other the unofficial advisor again with a trailing '*' character.
Also include two entries if there were two official advisors or two unofficial
advisors but please do not go beyond two advisors for one thesis.

To keep the amount of data within reasonable limits I am really only
interested in people who are in AI (preferably doing research) or who have
advised (directly or indirectly) someone in AI.  So if you are in AI the data
concerning you that I am interested in are your thesis advisor(s), their
advisors, and so on as far back as can be traced.  Of course, you can also
include other relevant data if you so wish.  If you know for certain that some
advisor has no advanced degree please include this.  I will assume that if
someone has only a master's degree listed then that person has no PhD.

I will collect all information sent to me and do as much error correction and
redundancy elimination as possible.  If enough responses are generated I will
send out periodic lists of the information generated, otherwise I will reply
only to the respondents.

Peter F. Patel-Schneider        {decvax!decwrl,hplabs}!flairvax!pfps

------------------------------

Date: 24 Jul 84 12:50:03-PDT (Tue)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!edison!rmk @ Ucb-Vax.arpa
Subject: Robotics Directory
Article-I.D.: edison.317

A few weeks ago someone posted a notice about the 1984 edition of the
International Robotics Industry Directory.  Would someone tell me
the publisher of this directory, or where it could be obtained?

                        Thanks much in advance,

                        Bob Kossey

{...houxm,...decvax!mcnc!ncsu!uvacs!iedl02}!edison!rmk  (804) 978-6378
GE - Industrial Electronics Development Lab             Charlottesville, Va.

------------------------------

Date: 25 Jul 1984 12:31:23-EDT
From: Philip.Kasprzyk at CMU-RI-ISL2
Subject: Lisp Speed Benchmarks

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I am interested in determining the absolute speed of execution of a lisp
system. I realize that this is a complicated task that is application
and implementation dependent. Nevertheless I am seeking the following:

1) A "Gibson mix"-style benchmark for Lisp-based systems.

2) Any data that experienced users of lisp may have regarding performance.
    (I don't care what machine or dialect of lisp is reported)

3) Any method or program which translates a lisp program into another
    target high level language with execution speed as its objective.
        (I don't mean a normal lisp compiler)

Can anyone out there help me?

If so send mail to pmk@isl2.

------------------------------

Date: Mon 23 Jul 84 09:54:56-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: naming names

  I have started naming names.
  It started with  the observation  that since I  am David  Throop, and  David
Throop is my name, then  it follows that I am  my name.  There is obviously  a
flaw here; I am not my  name.  I mean, if I changed  my name I would still  be
myself.  So it is  perhaps more clear to  say that I am  David Throop, and  my
name is "David Throop".  (This is still off.  I mean, "David Throop" is  still
a character string, a sequence of letters, and my name is something other than
a sequence of letters.   "David Throop" has  a value that is  a name, and  the
name has a value that is a person, and I am that person.)  [Are you still with
me?]
  That is, to say of course, that D - a - v - i - d - - T - h - r - o - o - p
is still another character string that denotes a character string that denotes
a  name  that  denotes  me.   (The  string  "me"  denoting  the  same  person,
coincidentally, as is denoted by my name.)
  Which is all  prologue to the  question of whether  I could give  my name  a
name.  Not  the name  "David Throop",  of course.   That's already  taken.   I
considered naming my name "david throop" but I felt that this might cause some
confusion.  (And raises the  ugly question of how  to pronounce it.  You  see,
the "h" is silent in my last name, and though "Throop" is pronounced with a
silent "h" I'm not sure that "throop" would be also.)
  [Which brings up the side  issue of the version of  my name as a  pronounced
set of sounds.   And on  reflection, I'm  not sure  whether the  value of  the
character string "David Throop" is a sound sequence, or my name itself.  Or it
may be that the value of the  sound sequence is the character string.  Or  its
more likely that the sound sequence and the character string are two  separate
objects that happen to have the same value.  Although, curiously, you can  get
from one to the other  and back again without  ever encountering me.  I  mean,
even if you didn't know me, even if you didn't know that "David Throop" was  a
name, you could  pronounce it and  if you heard  it you could  spell it.   But
you'd probably have  a little  trouble with the  silent "h".   It persists  in
injecting itself into the whole problem.]
  But back to giving my name a  name.  People always say, "Well I'm not  about
to start naming names", and I think the foregoing illustrates some of the
problems away from which  people are shying.   But then, how  are we going  to
talk about my name if it is nameless?
  For instance, if I tell you that I don't want to sully my good name, and you
reply, "What good name?" how can I reply?  If I reply, "Why, David Throop,  of
course," then I haven't referred to my name, I've referred to myself.  Of
course, I could reply "Why, "David Throop", of course" but those little  quote
marks are kind  of hard  to see in  a spoken  retort, and that's  the kind  of
challenge I reply to immediately.  It wouldn't  do to have a letter show up  a
week later saying "Why, "David Throop", of course."  One needs to defend one's
name promptly.  Some people have a cute  way of waving their hands in the  air
in order to indicate those  marks, but it kind of  takes the force out of  the
retort, and I must remember that my good name is on the line.
  So I've decided to name  my name something else.   Although I saw some  good
ideas in a book  named "Your Baby's Name",  I steered clear.  "Jason"  sounded
nice, but somebody might  think that that was somebody's name.  And it's  not.
It's a name's name.  I decided on G00483; as near as I can tell it's not being
used for anything else right now.  And it sounds like a name's name.
  But this brings up a question.  Is G00483 my name's name?  Or is it just  my
name for my name? (my own name, that is.)  After all, my name doesn't have any
need for its name.   I'm the one that  needs to know its  name, so that I  can
refer to it when you question my good name.  Since it doesn't name itself  and
I do, I'll just leave it as my name for the name of myself.
  Look, I realize this is all rather complex and  I  don't want to run it into
the ground.  Just  understand.  I've started  naming names.  I've  got a  good
one.  And for now, I'm retaining custody.
                                                 Sincerely,
                                                  David Throop

------------------------------

Date: Sun 22 Jul 84 23:34:17-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Neuroanatomy and Electromagnetic Waves

The August issue of High Technology has an interesting article about
EEG potentials in the brain in response to different stimuli.  It points
out, for instance, that the spatial patterns of the potentials are
very different for a subject hearing the noun "rose" and another hearing
the verb "rows".  These patterns of oscillatory activity may be simply
a side effect of neural processing or a fundamental information transfer
mechanism.  (It is suggested that transfer of signals via electromagnetic
radiation may be faster than broadcasting via neural interconnections,
but I find that hard to believe.  It is also suggested that the resonant
coupling of neural circuits may be a robust transmission mechanism in
an organ that is continually rearranging synapses and even losing neurons
[at about 50,000 per day, I think].)  The temporal frequency spectra of
these patterns are also presented as fuzzy hash functions possibly
responsible for associative memory.

A box accompanying the article discusses the Boltzmann Machine, an
architecture based on neural models.  Scott Fahlman (of NETL fame) and
Geoffrey Hinton are quoted.  (Scott mentioned this work in an AIList
issue last year.)  The Boltzmann machine apparently has stochastic
behavior even for deterministic inputs; this simplifies stochastic
analyses of its behavior.
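The stochastic behavior comes from the Boltzmann machine's unit update
rule, in which a binary unit turns on with a probability given by the
logistic of its energy gap.  A minimal sketch of the standard formulation
(my own illustration, not code from the article):

```python
import math
import random

def boltzmann_unit(energy_gap, temperature):
    """One stochastic binary unit of a Boltzmann machine.

    The unit turns on with probability 1 / (1 + exp(-gap / T)).  Even for
    a fixed (deterministic) energy gap the output is stochastic, which is
    the property noted above."""
    p_on = 1.0 / (1.0 + math.exp(-energy_gap / temperature))
    return 1 if random.random() < p_on else 0
```

At high temperature the unit fires nearly at random; as the temperature is
lowered ("annealing"), its behavior approaches a deterministic threshold.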

                                        -- Ken Laws

------------------------------

Date: 22 Jul 84 14:12:05-PDT (Sun)
From: hplabs!ames!eugene @ Ucb-Vax.arpa
Subject: Should The Turing test be modified with the times?
Article-I.D.: ames.427

I am not an AI expert, but I do know image processing and certain other
computationally intensive tasks which are 'easy' for humans.  I also know
the original definition of the Turing test.  I take issue
with statements that LISP machines are 'smarter' or 'better' than humans
for "subgoal" tasks.  What I am wondering is "should the Test be modified
to Our times?"

I recall that Turing specified that a communication link such as a tty or
phone could be used [1930s].  Should this be changed to a video link?
[This is an example only; there might be other aspects.]
Should the testee 'see' images?  Can machines recognize defocused images
of an animal and say "That is a cow" as humans could [to a limit].

Perhaps our definition of 'human' constitutes a moving target which
might make the Test more difficult.  The processing requirements of the
Turing Test in the 1930s would be less than those of today.  I can see it now:
over a crude link, we discover that we cannot tell the difference between
man and machine, then we hook up a video link and the difference 'becomes
apparent.'  Admittedly, one can argue that this is only a matter of adding
more processing power, but ignore that argument for a while.  Also, there might
be audio examples (perhaps not as powerful as the video example).

Comments?  This is meant for general discussion, not just for me.

--eugene miya
  NASA Ames Research Center

------------------------------

Date: 22 Jul 84 10:15:25-PDT (Sun)
From: decvax!mit-athena!yba @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: mit-athe.213

<Dave Seaman, re: phd thesis and orals passing>

You are trivializing the point, not understanding it.  The test of whether
an engineer understands engineering is that he can design and build things
that work.  I have met Ph.D.s who can write "learned" papers but cannot
"do" anything concrete.

If you must have a test, I'll assert that someone who can apply knowledge
of a field to a new area, and can transmit that knowledge to another who
was previously ignorant of said knowledge such that that person can apply
the knowledge in the same way, has understanding (yes, it's recursive).

Measurement of AI performance is important.  It is notions like "test"
that cause people to confuse "production systems" with "expert systems".
You may recall that the original notion of "expert system" was "a system
that solves problems the way (human) experts do".  This has been reduced
to rules-based production systems in many people's minds, because they
think that experts solve problems by applying rules.  I am not satisfied
that this is true; the question several people have asked is "does a
rules based system demonstrate an expert's intuition?"  After all, it passes
the "test" of applying knowledge to a problem.  You can substitute
"judgement", "intelligence", "understanding", or "talent" for the word
intuition if you like.

The question becomes rather concrete when you decide whether to allow
a program to practice medicine.  We have all heard of examples of
accredited (human) doctors who have not been able to safely practice
medicine although they passed all the qualifying "tests".

In fact, there seems to be a great body of technique floating around in
many disciplines; there is also a great lack of those who know what the
limits of application of those techniques are (usually because they
know the underlying assumptions and constraints).  I greatly fear
people who have become so proficient at using hammers that every problem
begins to resemble a nail.

I will also assert that you read my previous letter, processed the information,
responded, and did all this without understanding what I meant.  Now if we
assume you disagree, what test would you design to see which of us was
correct?  (Warning: this is a hard problem).

--
yba%mit-heracles@mit-mc.ARPA            UUCP:   decvax!mit-athena!yba

------------------------------

Date: 24 Jul 84 9:46:04-PDT (Tue)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: pucc-i.374

Reply to Mark Levine:

The original example was of a program which passes calculus tests.  You
objected that such a program has not really demonstrated understanding.
I agreed with your point before, and though I did not explicitly say so,
I thought it was implicit in the fact that I did not express disagreement.

I then suggested a different test:  suppose a program writes a Ph.D.
dissertation and passes its "orals."  I didn't mention any specific field,
but I was thinking of mathematics, where a Ph.D. dissertation involves the
development of new mathematics.  I then asked whether this program has
demonstrated understanding of its field.

>  If you must have a test, I'll assert that someone who can apply knowledge
>  of a field to a new area, and can transmit that knowledge to another who
>  was previously ignorant of said knowledge such that that person can apply
>  the knowledge in the same way, has understanding (yes, it's recursive).

I submit that the test I suggested meets the first half of your criterion.
You have added a new point here which I overlooked:  the ability to transmit
knowledge should also be considered part of the test.  I don't believe this
part should be weighted as heavily, however, since the best doers are not
necessarily the best teachers.  My objective was not so much to establish
the definitive test but to explore the question of whether a computer can
demonstrate understanding of a particular field (which is closely related
to the question of whether an adequate test can be devised).  I don't
understand why you think this is "trivializing the point."  You admitted
yourself that "measurement of AI performance is important."

>I will also assert that you read my previous letter, processed the information,
>responded, and did all this without understanding what I meant.  Now if we
>assume you disagree, what test would you design to see which of us was
>correct?  (Warning: this is a hard problem).

I really don't understand why you are being so defensive.  I agreed with your
original point and I have already said so.  In my own previous posting I did
not state any opinions;  I merely posed a question.  My objective was
enlightenment.  I am sorry you interpreted this as an attack on your position.
--

Dave Seaman                     My hovercraft is no longer full of
..!pur-ee!pucc-i:ags            eels (thanks to my confused cat).

------------------------------

End of AIList Digest
********************

∂27-Jul-84  2351	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #97
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Jul 84  23:50:22 PDT
Date: Fri 27 Jul 1984 22:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #97
To: AIList@SRI-AI


AIList Digest           Saturday, 28 Jul 1984      Volume 2 : Issue 97

Today's Topics:
  LISP - Pascal-Based Interpreter,
  AI Culture - Genealogy and Citation Linkages,
  Humor - Naming Names & COME-FROM & Chaostron & Sex,
  Jargon - Teleology and Teleonomy,
  Philosophy - Mind and Body,
  Intelligence - Turing Test & Understanding,
  Seminar - LISP Debugger,
  Workshop - Hardware Design Verification
----------------------------------------------------------------------

Date: 27 Jul 1984 14:49:59 EDT
From: Richard F. Hartung <HARTUNG@USC-ISI.ARPA>
Subject: LISP interpreter

I have a LISP interpreter written in PASCAL by Chris Meyers and myself.
It has about 90 functions and is approximately MACLISP in dialect.
It does its own garbage collection and is written in standard PASCAL.
I currently have it running on an HP-1000 and a VAX under VMS. It has
also been used with a Honeywell 6000.  It can easily be cut down in size
to run on small systems and is also easily expandable.  If you would like a
copy write to me on the net at:  HARTUNG@USC-ISI.ARPA or write to:
Dr. Michael A. Moran
Lockheed Missiles and Space Co.
Advanced Software Laboratory
O/92-10 B/255
3170 Porter Drive
Palo Alto, CA 94304

------------------------------

Date: Fri, 27 Jul 1984 12:11:35 EDT
From: Macintosh Devaluation Manager <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Genealogy & Naming Names

1.  The notion of studying the history of any subject via its intellectual
linkages is hardly a new one.  Advisor-advisee connections are important, but
an equally relevant approach is via citation-tracing -- looking at who has
quoted whom, and in what context.  The best tool for this type of work is
the Science Citation Index (from ISI).  Here, you can look up any given
article and find out who has referenced it during the past 12 months.  With
a bit of patience one can do a great deal of tracing by switching back and
forth between the index and various articles.

2.  David Throop's name problem was, as I recall, proposed in a more enjoyable
form by Lewis Carroll, in the scene where the White Knight offers to sing a
song to Alice.  We learn not only what the song is, but what its name is, and
what both the song and its name are called.

(I think Hofstadter carries this even further in Goedel, Escher, Bach, too...)

------------------------------

Date: Sun 22 Jul 84 23:49:49-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Articles

The July CACM has a few items that may be of interest to AIers.
The first is a letter to the editor from Rellim C. Drahcir pointing
out the relevance of Clark's COME-FROM statement to AI.  (COME-FROM
is an alternative to the GOTO.  Drahcir claims that COME-FROM simplifies
proof procedures: "It can be shown that an arbitrary starting point can
be utilized for any program, given a clear statement of its terminus.
Thus we have a computational analog of the long-sought and very
elusive 'solve problem' computer instruction.")

Another letter, from Vic Vyssotsky, explains the origins of the
famous (phony) BTL TM on the Chaostron learning system.  The
Chaostron memo was reprinted in the April CACM.

The journal also contains news notes on a Zurich workshop on AI
in economics and management and a Kansas City symposium on the role
of AI in command and control.

                                        -- Ken Laws

------------------------------

Date: Thu 26 Jul 84 09:38:10-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: infinite sexual partners

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]


  "Due to the increase in  the number of herpes  cases reported, the staff  at
the Student Health Center  suggests that people limit  themselves to a  finite
number of sexual partners."  -Daily Texan, September '83

  We decided at the time that two monuments were in order.  One is for the guy
that has had a denumerably infinite number of sexual partners.  But the other,
still bigger  monument would  be for  the guy  that has  had a  nondenumerably
infinite number of sexual partners.
  But we were curious.  Who are these guys? And where do they get off?
  The guy that has had a  denumerably infinite number of partners is  obvious.
He's the guy that slept  with everyone now living, who  has ever lived or  who
ever will live.  But the other guy?
  Our investigations show that this guy is into group sex.  He's the one  that
has slept with the power set of everyone now living, who has ever lived or who
ever will live.
  But that leaves  us with  an unanswered  question.  Consider  a woman  named
Polly who has slept with both these guys.  The first guy has slept with Polly,
while the other guy has, strictly, slept with the singleton set containing only
Polly.
  Which should be more satisfying?
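For anyone keeping score at home, the arithmetic behind the two monuments is
Cantor's theorem: the power set of any set is strictly larger than the set
itself, so the second guy's conquest really is of a higher order of infinity:

```latex
% Cantor's theorem: for every set S there is no surjection S -> P(S), hence
|S| < |\mathcal{P}(S)| = 2^{|S|}
% In particular, for a countably infinite set of partners:
\aleph_0 < 2^{\aleph_0}
```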

------------------------------

Date: 25 Jul 84 8:29:49-PDT (Wed)
From: ihnp4!whuxle!pez @ Ucb-Vax.arpa
Subject: re: AI-speak ?? (from SAIL's BBOARD)
Article-I.D.: whuxle.539

    Please note that not only has biology exiled teleology,
    but it has replaced it with teleonomy, meaning
    purposefulness in name only; that is, a notion meant to aid
    our understanding rather than to explain.  See Konrad
    Lorenz's Introduction to Ethology.

                Paul Zeldin.

------------------------------

Date: 22 Jul 84 23:24:26-PDT (Sun)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!edison!jso@Ucb-Vax.arpa
Subject: Re: more on atheism (+ cross-over to "mind" discussion)
Article-I.D.: edison.314

> I know not what Buddha says, but as for Descartes, cogito ergo sum!

> Yes, Descartes believed the real world could be proved to exist, and
> his famous proposition is but his first step: he proved he existed.

> Please be kinder to Rene next time. He would not rest well if he
> thought his method could be generally perceived to state the opposite
> of what he meant.
>                                       David Rubin
>                       {allegra|astrovax|princeton}!fisher!david


He *believed* that the real world could be proved to exist,
but he certainly didn't prove it logically.  It's been a while since
I read his "proof", but I seem to remember something like this:
He proves he exists, as a thinking entity, because the one thing
he can't deny is that he thinks.  He also experiences the external
world through his senses; this can either be real, he decides, or
the action of a "deceiving demon" (maya, illusion).  Fine so far.
How does he then "prove" that the outside world is real as opposed to
some "deception"?  Because God is Good.  He proves this quite logically;
he simply has some very questionable axioms...

This is similar to his thoughts on mind-body dualism.  He reached the
conclusion that the mind (soul?) and body were of separate substances,
and therefore could not interact.  But of course they did, and faced
with a nice, rational conclusion, and "facts" that disagreed with it,
he of course retained his conclusion, giving as explanation that
the mind and body couldn't interact, except in the pineal gland. [Huh?]
Kind of suggests that there's something wrong with mind-body dualism.
[Interesting how these netnews discussions cross-fertilize.
To net.ai'ers: Note that this says nothing against the existence
of the mind, but indicates that maybe there is no real duality,
(the universe is one...), or maybe no real body (hmm...)]

John Owens
...!{ {duke mcnc}!ncsu!uvacs houxm brl-bmd scgvaxd }!edison!jso

------------------------------

Date: 31 Jul 84 2:44:54-EDT (Tue)
From: hplabs!tektronix!uw-beaver!cornell!vax135!ukc!west44!gurr@Ucb-Vax.arpa
Subject: Re: Should The Turing test be modified with the times?
Article-I.D.: west44.276


I think that we're all missing something here - the Turing test was not
designed to test how like a human a machine could be, but to test whether
or not a machine could appear to think. Adding facilities such as a video
link merely turns the test into an imitation game. This is not what the
test was designed for.

Personally, I think the test is totally inconclusive and irrelevant. It gives
merely a subjective qualitative answer to a question which we cannot answer
satisfactorily about other people, or even about ourselves (from some of the
items on USENET, I'm sure some people don't think :-) !!!).

                                         mcvax
        "Hello shoes. I'm sorry               \
        but I'm going to have to                ukc!west44!gurr
        stand in you again!"                  /
                                        vax135

        Dave Gurr, Westfield College, Univ. of London, England.

------------------------------

Date: 27 Jul 84 08:42:24 PDT (Friday)
From: Hoffman.es@XEROX.ARPA
Subject: Re: Ph.D. and 'understanding'

From H. E. Booker in a piece in "Science" magazine (maybe around summer 1973):

"At the conclusion of an ideal undergraduate education, a man's brain
works well.  He is convinced, not that he knows everything or even that
he knows everything in a particular field, but that he stands a
reasonable chance of understanding anything that someone else has
already understood.  Any subject that he can look up in a book he feels
that he too can probably understand.  On the other hand, if he cannot
look it up in a book, he is uncertain what to do next.  This is where
graduate education comes in.  Unlike the recipient of a Bachelor's Degree,
the recipient of a Doctor's Degree should have a reasonable confidence in
his ability to face what is novel and to continue doing so throughout life."

--Rodney Hoffman

------------------------------

Date: Fri, 27 Jul 1984  17:34 EDT
From: HENRY%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - LISP Debugger

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Steps Toward Better Debugging Tools For Lisp

Henry Lieberman

Thursday, 2 August 1984, 3 PM
7th floor playroom, 545 Technology Square

Although contemporary Lisp systems are renowned for their excellent debugging
facilities, better debugging tools are still urgently needed.  A basic flaw
with the tools found in most implementations is that they are oriented
towards inspection of specific pieces of program or data, and they offer
little help in the process of localizing bugs within a large body of code.
Among conventional tools, a stepper is the best aid for visualizing the
operation of a procedure in such a way that a bug can be found without prior
knowledge of its location.  But steppers have not been popular, largely
because they are often too verbose and difficult to control.

We present a new stepper for Lisp, Zstep, which integrates a stepper with a
real-time full-screen text editor to display programs and data.  Zstep
presents evaluation of a Lisp expression by visually replacing the expression
by its value, conforming to an intuitive model of evaluation as a
substitution process.  The control structure of Zstep allows a user to "zoom
in" on a bug, examining the program first at a very coarse level of detail,
then at increasingly finer levels until the bug is located.  Zstep keeps a
history of evaluations, and can be run either forward or backward.  Zstep
borrows several techniques from the author's example-oriented programming
environment, Tinker, including a novel approach to handling error conditions.

A videotaped demonstration of Zstep will be shown.
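The "evaluation as substitution" model that Zstep animates can be sketched in
a few lines.  The following toy stepper is purely illustrative (it handles
binary arithmetic expressions, not Lisp, and has no connection to Zstep's
actual implementation): at each step it replaces the leftmost innermost
reducible subexpression by its value, yielding the sequence of displays a
substitution stepper would show.

```python
import operator

# expressions are nested tuples: ("+", 1, ("*", 2, 3))
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def step(expr):
    """One substitution step: reduce the leftmost innermost
    reducible subexpression.  Returns (new_expr, changed)."""
    if not isinstance(expr, tuple):
        return expr, False              # already a value
    op, left, right = expr
    left2, changed = step(left)
    if changed:
        return (op, left2, right), True
    right2, changed = step(right)
    if changed:
        return (op, left, right2), True
    return OPS[op](left, right), True   # both sides are values: apply

def trace(expr):
    """The sequence of expressions a substitution stepper would display."""
    states = [expr]
    while isinstance(states[-1], tuple):
        nxt, _ = step(states[-1])
        states.append(nxt)
    return states
```

For example, `trace(("+", 1, ("*", 2, 3)))` shows the inner product being
replaced by 6 before the outer sum is reduced.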

------------------------------

Date: Wed 25 Jul 84 18:38:01-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Workshop - Hardware Design Verification

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                WORKSHOP ON HARDWARE DESIGN VERIFICATION

The IFIP Working Groups 10.2 and 10.5 have issued a call for papers
to be delivered at a workshop to be held on November 26 and 27, 1984, at
the Technical University of Darmstadt, F.R. Germany. The workshop is on hardware
design verification and will cover all aspects of verification methods for
hardware systems, including temporal logic, language issues, and application
of AI techniques, as well as other areas.

The workshop committee is chaired by Hans Eveking, Institut fuer
Datentechnik, Technical University of Darmstadt, D-6100 Darmstadt, Fed. Rep.
Germany, (49) (6151) 162075, and includes Stephen Crocker, Aerospace
Corporation, P.O. Box 92957, Los Angeles, California 90009.

------------------------------

End of AIList Digest
********************

∂01-Aug-84  1020	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #98
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Aug 84  10:20:35 PDT
Date: Wed  1 Aug 1984 09:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #98
To: AIList@SRI-AI


AIList Digest           Wednesday, 1 Aug 1984      Volume 2 : Issue 98

Today's Topics:
  Expert Systems - Archaeology and PROSPECTOR,
  Image Processing - Request for Algorithms,
  Logic Programming - Public-Domain Theorem Provers,
  AI Languages - Frame-Based Languages,
  AI Hardware - Facom Alpha,
  LISP - Georgia Tech Lisp & Aztec C & Franz on P-E 3230,
  Seminar - Nonmonotonic Reasoning Using Dempster's Rule
----------------------------------------------------------------------

Date: 26 Jul 84 9:09:00-PDT (Thu)
From: pur-ee!uiucdcs!uiucuxc!chandra @ Ucb-Vax.arpa
Subject: Req: Info on Archaeological Expert - (nf)
Article-I.D.: uiucuxc.28900002

Help!!!
        I am a graduate student trying to build an Archaeologist's
assistant. This program is supposed to contain knowledge about human
habitation patterns, anthropological aspects, etc.

        This note is a request for any info on the application of
knowledge-based programs to archaeological surveying. I faintly remember
having seen a reference on this topic long ago.

        I am currently thinking of using some of the ideas used in
PROSPECTOR.

        Any ideas, comments, cues?

                                        Thanks,
                                        Navin Chandra

        (outside Illinois)      Phone 1-800-872-2375 (extension 413)
        (in Illinois)           Phone 1-800-252-7122 (extension 413)

------------------------------

Date: Sun 29 Jul 84 10:34:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Archaeology and PROSPECTOR

PROSPECTOR is a pretty fair hierarchical inference system, but
be advised that it provides no spatial reasoning mechanisms.  The
basic consultation mode asks questions about geologic conditions
at a single position or "site".  In map mode, it uses map data
to provide the same information independently for every point on
the map -- there is no spatial analysis or carry-over from one
point to the next.  You can add decisions based on criteria such
as being "near a fault", but the reasoning mechanisms have no
way of determining "nearness" automatically unless you provide
a "nearness map"; neither can they reason about one site being
nearer than its neighbors.  These deficiencies could be fixed, but
the existing PROSPECTOR is not a spatial reasoning system.

                                        -- Ken Laws
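The "nearness map" Laws describes need not be drawn by hand; it can be
computed from a feature map.  A minimal sketch (hypothetical code, not part
of PROSPECTOR; the function name and grid encoding are invented for
illustration) is a multi-source breadth-first distance transform: every
cell gets the grid distance to the nearest marked feature, e.g. a fault.

```python
from collections import deque

def nearness_map(grid):
    """Multi-source BFS distance transform.  `grid` is a list of lists
    with 1 marking a feature cell (e.g. a fault) and 0 elsewhere.
    Returns a same-shaped grid of 4-connected distances to the
    nearest feature."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):               # seed the queue with all features
        for c in range(cols):
            if grid[r][c]:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:                        # expand outward one ring at a time
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

A rule such as "near a fault" could then threshold this map, though as Laws
notes, genuine spatial reasoning (comparing a site to its neighbors) needs
more than any precomputed map.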

------------------------------

Date: 27 Jul 84 11:44:30-PDT (Fri)
From: ihnp4!drutx!zir @ Ucb-Vax.arpa
Subject: image processing
Article-I.D.: drutx.763


     I am trying to track down source code for image
     processing routines, such as digital filters, noise
     filters, dithering filters, shape recognition and
     visual database design. Any and all responses will be
     appreciated. I will post results in Net.sources if there
     is sufficient interest.

         Thanks for your time,
         Mark Zirinsky
         AT&TIS, Denver 31d48
         (303)  538- 1063

------------------------------

Date: 30 Jul 1984 13:36-PDT
From: dietz%USC-CSE@USC-ECL.ARPA
Subject: Public Domain Theorem Provers


I'm trying to find out what's available in the public domain in the way
of theorem proving programs and subroutine packages.  If you have such
please send a note to:

                Paul Dietz
                dietz%usc-cse@usc-ecl

------------------------------

Date: Sun, 29 Jul 84 22:01 EDT
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Frame-Based Languages


I am investigating some implementation techniques for frame-based
representation languages with inheritance.  Most such languages do
inheritance at "access time" and may or may not keep a local copy of the
inherited data.  I am trying to determine which languages and/or
implementations of languages have instead done the inheritance at
"definition time" by making some kind of explicit local copy or pointer to
the inherited information.  I am particularly interested in finding out if
any languages have done this in a general way that would allow changes in
the attributes of a generic object to be properly inherited by its current
descendants.

Tim
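The two policies Tim contrasts can be sketched in a few lines of
illustrative code (the class and slot names are invented and assume nothing
about any existing frame language).  Note how the definition-time copy fails
to see later changes to the generic object, which is exactly the propagation
problem his question raises.

```python
class Frame:
    """Minimal frame with single inheritance and two lookup policies:
    access-time inheritance chases the parent chain on every read;
    definition-time inheritance (copy_down=True) snapshots all
    inherited slots when the frame is created."""
    def __init__(self, parent=None, copy_down=False, **slots):
        self.parent = parent
        self.slots = dict(slots)
        if copy_down and parent is not None:
            # definition-time: materialize everything visible in ancestors,
            # nearest ancestor winning, without overwriting local slots
            p = parent
            while p is not None:
                for k, v in p.slots.items():
                    self.slots.setdefault(k, v)
                p = p.parent

    def get(self, name):
        """Access-time lookup: local slots first, then the parent chain."""
        f = self
        while f is not None:
            if name in f.slots:
                return f.slots[name]
            f = f.parent
        raise KeyError(name)

elephant = Frame(color="gray")
clyde_lazy = Frame(parent=elephant)                   # access-time
clyde_eager = Frame(parent=elephant, copy_down=True)  # definition-time
elephant.slots["color"] = "pink"                      # change the generic
```

After the change, the access-time frame sees "pink" while the snapshot still
answers "gray"; a general definition-time scheme would need back-pointers
from generics to descendants to repair such stale copies.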

------------------------------

Date: Mon 30 Jul 84 18:23:17-EDT
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Facom Alpha

     MIS Week for 8/1/84 (p. 18) reports the following:

     ''Fujitsu Ltd. last week announced shipment next March of Japan's
first Lisp machine, named Facom Alpha, claiming it is four times
faster than the Symbolics 3600 in executing artificial intelligence
programs such as expert systems.

     ''The Alpha, carrying a price tag of $90,930, was said to be a
back-end processor connectable with a Fujitsu mainframe or the
company's S-3000 super minicomputer. It runs 'Utilisp,' a local
version of Lisp language developed by Tokyo University.''

     What catches one's eye is the claim that the Facom Alpha is four
times faster than the Symbolics 3600.

     Reading the popular computer press these days could easily give
one the impression that Japan is about to trounce the U.S. in the
development of both supercomputers and AI systems. Does anyone on
AIList know whether this claim about the Facom Alpha's speed has any
grounding in reality?

-- Wayne McGuire --

------------------------------

Date: Mon 30 Jul 84 17:34:23-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Yet Another Lisp Dialect?

I recently received a rather indirect inquiry concerning a Lisp dialect called
"Georgia Tech Lisp".  Could anyone out there provide or direct me to some
information about this variant and its idiosyncrasies?

Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

Date: 31 Jul 84 10:09:44-PDT (Tue)
From: hplabs!pesnta!lpi3230!steve @ Ucb-Vax.arpa
Subject: Franz Lisp running on Perkin Elmer 3230 Unix
Article-I.D.: lpi3230.142


Franz Lisp (Opus 38.79) is now running on a Perkin Elmer 3230 under
their version 2.4 Unix (a V7 version).  Soon after PE delivers their
promised System 5.2, it will be ported to that system.  For the many
of you who have never heard of Perkin Elmer, They used to be called
Interdata and an Interdata machine was the first machine to which
Unix was ported in the mid seventies.  The 3230 has about 90% of the
speed of a VAX-780 for the price of a 750.

For the few of you who actually HAVE a PE machine and want to use Franz
Lisp, send me mail.  We haven't yet decided under what terms to make it
available.  The port was too time consuming and expensive to just give
it away, but we aren't in business and do not have the manpower to really
market and support it.  Maybe PE will distribute it on a third party
basis at a reasonable cost.

                                        Steve Burbeck
                                        Linus Pauling Institute
                                        440 Page Mill Road
                                        Palo Alto, CA 94306  (415)327-4064
                                        hplabs!{analog,pesnta}!lpi3230!steve

------------------------------

Date: 28 Jul 1984 2132-CDT
From: Usadacs at STL-HOST1.ARPA
Subject: LISP in Aztec C, Public Domain

  Ref: AI Digest, V2 #90, "LISP in Aztec C".  It is available from
SIMTEL20 via FTP, in MICRO:<SIGM.VOL118>.

A.C. McIntosh, USADACS@STL-HOST1.

------------------------------

Date: Mon 30 Jul 84 15:14:35-PDT
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: Seminar - Nonmonotonic Reasoning Using Dempster's Rule

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, August 3, 1984
LOCATION:    Chemistry Gazebo, between Physical & Organic Chemistry
TIME:        12:05

SPEAKER:     Matt Ginsberg
             Heuristic Programming Project
             Stanford University

TOPIC:       Non-monotonic Reasoning Using Dempster's Rule

Rich's suggestion that the arcs of  semantic nets be labeled so as  to
reflect confidence in the properties they represent is investigated in
greater detail.   If these  confidences are  thought of  as ranges  of
acceptable probabilities,  existing statistical  methods can  be  used
effectively to combine them.  The framework developed also seems to be
a natural one in which to describe higher levels of deduction, such as
"reasoning about reasoning".
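For readers unfamiliar with it, Dempster's rule combines two bodies of
evidence by multiplying the masses of their focal sets, intersecting the
sets, and renormalizing away the mass assigned to conflicting
(empty-intersection) pairs.  A minimal sketch, not taken from the talk,
with mass functions represented as dicts from frozensets to numbers:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination.  m1 and m2 are mass functions
    over the same frame of discernment: dicts mapping focal elements
    (frozensets) to masses summing to 1."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                      # compatible evidence
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:                          # conflicting evidence
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict              # renormalization constant
    return {s: v / norm for s, v in combined.items()}
```

Interval-valued confidences of the kind mentioned above can be recovered
from the combined masses as belief/plausibility bounds on each proposition.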

------------------------------

End of AIList Digest
********************

∂02-Aug-84  1213	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #99
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Aug 84  12:13:39 PDT
Date: Thu  2 Aug 1984 10:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #99
To: AIList@SRI-AI


AIList Digest            Thursday, 2 Aug 1984      Volume 2 : Issue 99

Today's Topics:
  AI Funding - Call for Questions,
  LISP - IBM 4341 Implementation?,
  Applications - Design and Test,
  Journal - Symbolic Computation,
  Book - Successful Dissertations and Theses by David Madsen,
  Intelligence - Turing Test & Understanding,
  Software Validation - Expert Systems,
  Seminar - Speech Recognition Using Lexical Information
----------------------------------------------------------------------

Date: 1 Aug 84 09:54 PDT
From: stefik.pa@XEROX.ARPA
Subject: Call for Questions:  AAAI panel on SC

DARPA's Strategic Computing initiative is going to be a major source of
funding for AI research (as well as other Computer Science research) in
the next several years.   The project has been hailed as "just in time"
by people concerned with the levels and directions of funding for
research in Computer Science.  It has also attracted the criticism of
those who are worried about the effect of military goals on funding, or
about dangers of trying to guide research too much.

        Next Friday morning at the AAAI conference in Austin, there will be a
panel session during which several members of the DARPA staff will
present the goals, ideas, and scale of this program.  The presentation
will be followed by a question and answer period with me as moderator.
Some of the questions will come "live" from the audience.

        Because the SC project will affect our research community in many ways,
I would like to make sure that the questions address a broad enough
range of issues.  To this end I am now soliciting questions from the
community.  I will select a sampling of "sent-in" questions to try to
provide a balance across issues of concern to the community -- anything
from funding levels, to research objectives, to 5th generation
comparisons, to the pace of the research, to expectations by the
military, to statements that have appeared in the press, etc.

        Please send questions to me -- Stefik@Xerox.Arpa.  Keep them short.  I
don't want to wade through long paragraphs in search of a coherent
question.  Think of questions that could fit easily on a 35 mm slide --
say 25 words or so.  I expect to choose from these sent-in questions for
about half of the Q/A period.

Mark

------------------------------

Date: 30 Jul 84 9:50:07-PDT (Mon)
From: ihnp4!mhuxl!ulysses!unc!mcnc!philabs!cmcl2!lanl-a!cib @ Ucb-Vax.arpa
Subject: Query - LISP for IBM 4341?
Article-I.D.: lanl-a.11272

I would be very grateful for information on LISP dialects
for the IBM 4341, and sources thereof.

Thank you.

------------------------------

Date: Thu 2 Aug 84 10:43:10-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: IEEE Design & Test

The July issue of IEEE Computer Graphics mentions that IEEE Design & Test
of Computers is seeking submissions for a special August 1985 issue on
artificial intelligence techniques in design and test.  They particularly
solicit material on AI in design automation, CAD, and CAT, and on
expert systems, automatic design systems, test generation and system
diagnosis, natural-language CAD interfaces, and special-purpose hardware
to support AI systems.

Submit four copies by December 1 to Guest Editor Donald E. Thomas,
ECE Department, Carnegie-Mellon University, Pittsburgh, PA  15213,
(412) 578-3545.

                                        -- Ken Laws

------------------------------

Date: Thu 12 Jul 84 13:49:29-CDT
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: New Journal/Call for Papers

The Journal of Symbolic Computation (published by Academic Press, London) will
publish original articles on all aspects of the algorithmic treatment of
symbolic objects (terms, formulae, programs, algebraic and geometrical
objects).  The emphasis will be on the mathematical foundation, correctness and
complexity of new sequential and parallel algorithms for symbolic computation.
However, the description of working software systems for symbolic computation
and of general new design principles for symbolic software systems and
applications of such systems for advanced problem solving are also within the
scope of the journal.

Manuscripts should be sent in triplicate to:

   B. Buchberger, Editor
   Journal of Symbolic Computation
   Johannes-Kepler-Universitat
   A-4040 Linz, Austria

Associate Editors:  W. Bibel, J. Cannon, B. F. Caviness, J. H. Davenport, K.
Fuchi, G. Huet, R. Loos, Z. Manna, J. Nievergelt, D. Yun.

------------------------------

Date: Wed 1 Aug 84 09:50:42-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Successful Dissertations and Theses by David Madsen

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Successful Dissertations and Theses; a guide to graduate student research
from proposal to completion by David Madsen  LB2369.M32 1983 c.3, is
currently on the New Books Shelf in the Math/CS Library.  HL

------------------------------

Date: 25 Jul 84 9:54:00-PDT (Wed)
From: pur-ee!uiucdcs!ea!mwm @ Ucb-Vax.arpa
Subject: Re: Should The Turing test be modified w - (nf)
Article-I.D.: ea.500002


>What I am wondering is "should the Test be modified
>to Our times?"

I don't think so; at least not with the video link you mentioned.  A key
element in the Turing Imitation game was that it hid the handicaps suffered
by the computer, leaving only the (possible) intelligence exposed. If you
could modify it without subtracting that property, then I'd say yes. It
just isn't clear that that can be done.

>I can see it now,
>over a crude link, we discover that we cannot tell the difference between
>man and machine, then we hook up a video link, and the difference 'becomes
>apparent.'

If that were the case, it would seem that the "apparent difference" would
be identical to the difference you get between a blind man and a sighted
man.  Are we therefore to conclude that the blind are only artificially
intelligent?

>--eugene miya
>  NASA Ames Research Center

        <mike

------------------------------

Date: 30 Jul 84 10:25 PDT
From: Woody.pasa@XEROX.ARPA
Subject: Turing tests

There's this accounting computer at the Santa Fe (where my dad works),
and before it was installed, accounting was something which needed a very
intelligent person to do.  It required a high level of intelligence to keep
the books balanced, the type of intelligence a machine could never have.
The Santa Fe uses a computer to keep all their books now.
But note that the discussion with accounting is now not "The computer is
intelligent--look, it can keep the accounting books for an entire company",
but "Gee, anyone can keep the accounting books; even a computer."

The Turing test is a poor test, granted; but can there be a more
generalized test to tell if a computer is truly intelligent?  With the
Turing test, we can give the computer and the human at the other end a
test in math, understanding, and creativity; we could even talk about
the presidential elections; we're not restricted to the things that
have been discussed earlier.

As for hooking up a camera to the computer and using visual
identification as a test for intelligence: I know of a few blind
people who would be hard-pressed to pass that test.  Sure, it takes a
lot to be able to see, but then most mice can see, and some humans
cannot; does that make the mice smarter than the humans?

  - Bill Woody
    1-60 Caltech
    Pasadena, CA 91126

------------------------------

Date: 25 Jul 84 12:06:17-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!rocksanne!sunybcs!gloria!colonel
      @ Ucb-Vax.arpa
Subject: Re: can computers understand?
Article-I.D.: gloria.400


As long as the problem of "understanding" has come up again, here's
a provoking quotation:

        In this argument [deleted] commits two blunders.  He interprets
        understanding as the limit of an evolutionary process of
        baconian observation, and he treats understanding, like
        intelligence, as a fixed property independent of its
        possessor.

        To understand is to assimilate a process foreign to oneself.  A
        machine does not "understand" how to make screw eyes, because
        that is part of its function. ... When we examine [deleted]'s
        argument closely, it reduces to two familiar ideas:  the
        logical idea that all understanding rests on knowledge of the
        principles of physics, and the psychological idea that
        understanding is necessary for the sake of controlling.
        ... The ideal of [deleted]'s theory would be a computer that
        "understands" natural language well enough to be able to make
        people do its bidding.
                                        --Maia I. Aimless (1979)

--
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 5 Jul 84 10:10:42-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!philabs!linus!vaxine!wjh12!harvard
      !seismo!hao!ames-lm!eugene @ Ucb-Vax.arpa
Subject: Expert System Test
Article-I.D.: ames-lm.383

With regard to expert systems, I thought of an interesting
[take this with a grain of salt] set of tests to evolve or refine
the development of such systems.  These tests would test the expertise
of such systems.  Take a classic system like MYCIN.
When the developers feel the system is ready for a shakedown,
[remember, this is not entirely serious, but not for the weak of heart]
infect the developers of the system with one of the diseases in the
knowledgebase, and let them diagnose their own ailment.
There might be interesting evolutionary consequences in software development.

Similarly, other people developing other systems would put their
faith and lives on the line for the software systems they develop.
Are these systems truly 'expert'?

Admittedly, not a rigorous test, but neither was Turing's.

The above are opinions of the author and not the funding Agency.

--eugene miya
  NASA Ames Research Center
  {hplabs,hao,dual}!ames-lm!aurora!eugene

------------------------------

Date: Wed 1 Aug 84 18:45:25-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Speech Recognition Using Lexical Information

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

  LEXICAL ACCESS USING PARTIAL INFORMATION

By Daniel P. Huttenlocher, Massachusetts Institute of
   Technology, Friday, August 3, 2 p.m. in the Trailers'
   Conference Room next to Ventura Hall.


ABSTRACT:  Current approaches to speech recognition rely on classical
pattern matching techniques which utilize little or no language knowledge.
We have recently proposed a model of word recognition which uses
speech-specific knowledge to access words on the basis of partial
information.   These partial descriptions serve to partition a large lexicon
into small equivalence classes  using sequential phonetic and prosodic
constraints.  The representation is attractive for speech recognition systems
because it allows all but a small number of word candidates to be excluded
using only a crude description of the acoustic signal.  For example, if the
word ``splint'' is represented according to the broad phonetic string
[fricative][stop][liquid][vowel][nasal][stop], there are only two matching
words in the 20,000 word Webster's Pocket Dictionary, ``splint'' and ``sprint.''

Thus, a partial representation can both greatly reduce the space of possible
word candidates, and be relatively insensitive to variability in the speech
signal across utterance situations. This talk will discuss a set of studies
examining the power of such partial lexical representations.
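
The partitioning idea is easy to sketch. In the fragment below the tiny
phonetic lexicon and the broad-class table are invented for illustration;
a real system would draw on a full pronunciation dictionary such as the
20,000-word one cited above.

```python
from collections import defaultdict

# Toy phonetic lexicon (hand-coded for illustration only).
LEXICON = {
    "splint": ["s", "p", "l", "i", "n", "t"],
    "sprint": ["s", "p", "r", "i", "n", "t"],
    "stand":  ["s", "t", "ae", "n", "d"],
    "cat":    ["k", "ae", "t"],
    "cap":    ["k", "ae", "p"],
}

# Collapse fine phonemes into broad manner classes.
BROAD = {
    "s": "fricative", "f": "fricative",
    "p": "stop", "t": "stop", "k": "stop", "d": "stop", "b": "stop",
    "l": "liquid", "r": "liquid",
    "m": "nasal", "n": "nasal",
    "i": "vowel", "ae": "vowel",
}

def broad_key(phones):
    # Broad phonetic class string for a word, e.g. "splint" ->
    # [fricative][stop][liquid][vowel][nasal][stop]
    return "".join("[%s]" % BROAD[p] for p in phones)

def partition(lexicon):
    # Group the lexicon into equivalence classes sharing a broad key.
    classes = defaultdict(list)
    for word, phones in lexicon.items():
        classes[broad_key(phones)].append(word)
    return classes

classes = partition(LEXICON)
print(sorted(classes[broad_key(LEXICON["splint"])]))  # ['splint', 'sprint']
```

As in the abstract's example, the crude broad-class description already
narrows "splint" down to a two-word equivalence class.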

------------------------------

End of AIList Digest
********************

∂04-Aug-84  0512	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #100    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Aug 84  05:11:18 PDT
Date: Fri  3 Aug 1984 11:43-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #100
To: AIList@SRI-AI


AIList Digest             Friday, 3 Aug 1984      Volume 2 : Issue 100

Today's Topics:
  LISP - Common Lisp Implementations,
  Simulation - Information Mechanics,
  Demos - U. Texas Demos at AAAI,
  Newsletters - Canadian Artificial Intelligence Newsletter,
  Robotics - Challenge,
  Reviews & Humor - Expert Systems, Fuzzy Logic, and Fuzzy Batteries,
  Expert Systems - $12,000 Software?
----------------------------------------------------------------------

Date: Fri, 3 Aug 84 01:00:23 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Common Lisp

Is there a commercially available implementation (by which I mean either
sold by a for-profit company or available from educational and research
institutions for use by for-profit companies) of Common Lisp for either
4.2 bsd Unix on a VAX, or for Version 7 Unix on a 68000?  If anyone out
there has any leads, I would most appreciate hearing of them.

                                                    Harry Weeks
                                                    (g.weeks@Berkeley)

------------------------------

Date: 3 Aug 1984 13:56:42-EDT
From: sde@Mitre-Bedford
Subject: Total simulation and Information Mechanics

Someone asked if I could abstract (what I recall as) the I.M. argument that
total simulation requires at least the mass of the thing to be simulated.
Rather than chance noise in the channel, I think it would be better to inquire
of Dr. Frederick W. Kantor, who may be telexed at 4998124, answer-back
KANTOR FW, or MCI mail #2050656.
Dr. Kantor laid the foundation for the field of Information Mechanics,
which takes its name from his book. John Wiley & Sons, Inc., asked for
permission to publish his research notes, which are available as a book
(copies from the 2nd printing) that remains the defining monograph
world-wide for the field. Dr. Kantor set down the name for the field
"Information Mechanics" as the last two words of the monograph.

If the above seems other than what was requested, my apologies, but I think
that Fred is the best person to deal with the matter.

   David   sde@mitre-bedford

------------------------------

Date: Mon 30 Jul 84 14:44:35-CDT
From: Gordon Novak Jr. <CS.NOVAK@UTEXAS-20.ARPA>
Subject: U. Texas Demos at AAAI

The University of Texas at Austin will hold an open house and series of
demonstrations during AAAI-84.  The demonstrations will be from 7-9 PM
on Tuesday, August 7.  The demonstrations are not listed in the conference
schedule, but a brochure describing them and showing where they are located
will be provided in the registration packet.  Bus service will be provided.
The demonstrations will include:

J. K. Aggarwal                 Laboratory for Image and Signal Analysis
R. S. Boyer, J S. Moore        The Boyer-Moore Theorem Prover
Shang-Ching Chou               Geometry Provers Based on Wu's Algorithm
Larry Hines                    Inequality Theorem Prover
W. Lehmann, O. Gajek,          METAL German-to-English Machine Translation
  J. Slocum, J. White, B. Root
Robert Levinson                A Self-Organizing Retrieval System for Graphs
Gordon Novak                   GLISP Language and GEV Data Inspector
Gordon Novak                   ISAAC Physics Problem Solver
Elaine Rich, William Murray    Automatic Debugging System for LISP
Robert F. Simmons, Chin Chee   Device Simulation
Robert F. Simmons              An English to Horn Clause Translation System

------------------------------

Date: 2 Aug 84 12:54:24-EDT (Thu)
From: Graeme Hirst <gh%toronto.csnet@csnet-relay.arpa>
Subject: Canadian Artificial Intelligence Newsletter

.    CANADIAN  A R T I F I C I A L   I N T E L L I G E N C E   NEWSLETTER    .

                   Publication begins in September 1984
                   ====================================

A subscription to the /Canadian Artificial Intelligence Newsletter/ is included
in membership in CSCSI/SCEIO, the Canadian artificial intelligence society.
All members will soon be receiving the first issue.

The /Newsletter/ will include news reports from industry and universities,
opinions, reviews, abstracts of recent research, and announcements of general
interest in A.I.  To join the society and receive the /Canadian A.I.
Newsletter/ quarterly, just send a note to CIPS (which administers membership
for the society), with the appropriate fee (see below).  Join now, to be sure
of receiving the first issue.  ** Non-Canadian members are welcomed. **

       CIPS
       243 College Street, 5th floor
       Toronto, CANADA  M5T 2Y1

Membership: $10 regular, $5 students (Canadian funds); there is a discount of
$2 for CIPS members. Payment may be made in U.S. dollars at the current rate
of exchange (C$1.00 = US$0.76).

Articles and other material for the newsletter may be submitted to the editor:
Graeme Hirst, Department of Computer Science, University of Toronto, Toronto
Canada M5S 1A4.  Phone: 416-978-8747.    UUCP: ...!utcsrgv!cscsi
CSNET: cscsi@toronto    ARPANET: cscsi.toronto@csnet-relay

------------------------------

Date: Fri 3 Aug 84 10:29:50-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Robotics

The following challenge appears in the Forum column of the August issue
of IEEE Spectrum:

                         The Canine Computer

Having seen the June issue, I would like to raise a question about the
ability of roboticists to fuse technology with canine capabilities,
let alone human ones.  I hereby challenge the world's robot experts to
duplicate electronically the performance of my little dog, who is able
to catch morsels of raisin buns that I toss to him occasionally as he
sits patiently and expectantly beside the table.

His performance is spectacular.  He is able to calculate the parabolic
trajectory of the morsel, regardless of its height or direction, and
catch it in his mouth, often in the split second before it has reached
its apogee.  He can do this at all levels---from a crouch to catch low-
flying morsels, to a jump to catch high ones.  His accuracy is
astounding, showing that his internal computer can calculate a parabolic
course and give complete and elaborate instructions to his nervous
system, including the opening and closing of his mouth at the right
microsecond.  About 5 percent of the time the morsel hits the tip of
his nose and bounces off in a random direction.  This event is followed
by a lightning retrieval from the floor where it lands.  (On one
occasion he was able to catch a morsel on a second try after it had
bounced off his nose.)

My dog weighs 37 pounds.  Can anyone build a robot that can equal this
dog's operation while on a smooth linoleum tile floor and in an
illumination of about 15 footcandles?  Can it be done without the
37-pound restriction?

I offer no prize for this accomplishment.  Perhaps some wealthy
philanthropic roboticist would like to step forward.  Until electronic
technology can equal the computer in the brain of a little dog, its
very honor is at stake.

                                        William B. Elmer
                                        Thornton, N.H.


The IEEE Spectrum editors then mention that John Billingsly is
organizing a contest for Ping-Pong playing robots, to be held at the
Computer Fair in London in 1985.  Dr. Billingsly's address is:
Dept. of Electrical and Electronic Engineering, Portsmouth Polytechnic,
Anglesea Road, Portsmouth, England.  The International Personal
Robot Congress and Exposition will hold the U.S. trials for a 1986
Ping-Pong contest during the March 1985 meeting: contact IPRC, 777
Locust St., Denver, Colorado 80220.


This same issue contains a favorable book review of Ayres and Miller's
Robotics: Applications and Social Implications.

                                        -- Ken Laws

------------------------------

Date: Fri 3 Aug 84 11:06:23-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Expert Systems, Fuzzy Logic, and Fuzzy Batteries

I have recently run across the July 9 issue of Business Week, which
featured Artificial Intelligence as its cover story (pp. 54-62).
Much of the article discussed expert systems and the 40 or so companies
now trying to market them.


The August 1984 issue of IEEE Spectrum contains an excellent article by
Lotfi Zadeh about fuzzy logic and its applications to process control,
robot navigation, database access, expert systems, and other topics.
He mentions that fuzzy mathematics now includes the theory of fuzzy
topological spaces, fuzzy measures, fuzzy groups, fuzzy random variables,
fuzzy arithmetic, fuzzy analysis, fuzzy stability theory, fuzzy systems,
and fuzzy graphs.  Dr. Zadeh presents a good case for fuzzy linguistics
and fuzzy reasoning (as in MYCIN and PROSPECTOR) as essential elements
of expert systems and learning systems.


For more fuzzy talk, the issue reprints an 1884 Life magazine article about
the cat battery.  An excerpt:

    Cats, according to Tyndall, are either electro-positive or
    electro-negative.  When in the neutral state (see Plate I) both
    fluids are combined, and the most sensitive galvanometer can detect no
    current.  Thus insulated, neither A nor B exhibits either attraction
    or repulsion for surrounding objects, excepting for a hot stove or a
    piece of fish.  But this affinity, according to the recent
    investigations of Siemens and Halske, is the result of chemical and
    not electrical attraction.

    Now, however, let us submit electro-positive cat A and
    electro-negative cat B to exciting influences (see Plate II).
    Instantly we observe the development of electrical energy -- A being
    strongly positive that he is the better cat, while B is as violently
    negative.  This, as has been proved by the experiments of Prescott,
    Edison, and others, is due to induction; each cat trying to induce the
    other to believe that he isn't afraid.

    This electrical state of activity is accompanied by all the well-known
    electro-static phenomena.  The hairs of each cat stand on end, and
    surrounding objects -- such as bootjacks, soap, cough-medicine
    bottles, and crockery -- may be attracted with great velocity from
    distances of 100 to 250 feet.

    Cats are absolute non-conductors.  This fact was discovered in 1876 by
    Gerritt Smith, while vainly endeavoring to conduct a cat out of the
    coal cellar.  It might be urged, therefore, that they had high
    internal resistance.  This is not true.  The external resistance
    (again glance at Plate II) is very high, but the internal resistance
    is never over one Ohm ("'ome" or "home," to give German, English, and
    American terms), while in many cases it is less, as is witnessed by
    the fact that there are 1,317,009 ohmless cats in this city alone.
    But while the internal resistance is surprisingly low, the intensity
    is so high that by inductive influence alone two cat elements can
    maintain a whole neighborhood in a state of electrical excitement for
    hours.    [...]


Speaking of fun with words, this issue of Spectrum also quotes a poll showing
that "chemists, if not actually better than all other human beings, are,
to say the least, a credit to their race and a damned fine group of
upstanding and patriotic Americans, all of whom embody the finest attributes
that can be attributed to those to whom those attributes can be attributed."
[From Ralph Steinhardt Jr. and David Weinman, "The Courteous Retort,"
Chemtech, Vol. 14, No. 6, June 1984.]

                                        -- Ken Laws

------------------------------

Date: 2 Aug 1984 10:17-PDT
From: the tty of Geoffrey S. Goodfellow <Geoff @ SRI-CSL>
Subject: $12,000 Software?


San Francisco Sunday Examiner & Chronicle, July 29, 1984

John Dvorak
PERISCOPE

$12,000 software?

Would you pay $600 apiece for gold-plated lug nuts to be used on the
beat-up rims of a '52 Ford pickup truck?  What would you think of the
marketing man who suggested such a product?  I'd think he was crazy.

There's a company down in Palo Alto that has a software package it
would like someone to buy.  (A little background music, please) The
company is called Teknowledge Inc. and was started by Stanford
professors.

The company makes software for the IBM PC.  You can buy an IBM PC for
around $2,000.  The software this company sells costs $12,000.  It
accomplishes one thing: It allows you to test an idea to see whether
an expert system can be built around the idea.  An expert system is a
computer system that solves complex problems using so-called
artificial intelligence.

An example of an expert system is a program called Mycin.  It was
developed in the 1970s to diagnose meningitis and other infections.  A
user tells the computer certain requested facts and the computer then
leads the user to something close to a diagnosis.

Now most people would have made a package like this and had it run on
an IBM mainframe computer or at least a VAX minicomputer.  But to put
a $12,000 piece of software on a personal computer is cavalier, to say
the least.  It's as if to say, "Yeah, we've got this package and we
know what it's worth, and we're going to let the well-heeled
government-financed research companies use it.  Look, we can even make
it run on a personal computer--look, but don't touch."

This is the old pre-micro attitude toward software.  It was proven to
be myopic when companies like MicroPro, Ashton-Tate and MicroSoft
started doing business in excess of $50 million a year by selling for
$500 software that would have cost $12,000 if marketed by Teknowledge.
MicroPro decided that everyone needed the power of a dedicated word
processor, Ashton-Tate felt that more than just a few dozen researchers
would like the power of a relational database, and MicroSoft felt that
a computer language would be popular if available for $350 instead of
$35,000.

OK, so forget about the price.  What can the Teknowledge package do?
The system is called M.1 (pronounced M dot 1 by my friends).
According to its own press release, you spend the $12,000 "for rapid
prototyping of potential full-scale operational systems.  In addition
to establishing technical feasibility, these example systems serve as
useful demonstrations."  That means this investment just gets you
started--started spending, that is.

Luckily, the system can also create a stand-alone expert system with
up to 200 knowledge base entries.  I'm not impressed.  The company
goes on to exemplify this stand-alone value with a "Wine Adviser"
expert system with 100 knowledge base entries.  Here's the actual
output from this "expert" system.  This is called by the company "the
deliberation process of a typical California wine expert." The computer
asks the question and the user responds.

Do you generally prefer red or white wines?    Red.

Do you generally prefer light, medium or full-bodied wines?    Full.

Is the main component of the meal meat, fish or poultry?    Poultry.

Does the meal have turkey in it?    No.

Is the sauce for the meal spicy, sweet, cream or tomato?    Tomato.

Is the flavor of the meal delicate, average or strong?    Average.

The following wines will mostly be dry, medium-bodied, and red.  They
are recommended for your meal:  Zinfandel (86%); Cabernet Sauvignon
(86%); Burgundy (34%); Valpolicella (34%).


At this point every wine connoisseur is turning over in his grave.  So
the user goes out and buys a Zinfandel from Amador County laced with
residual sugar and 15 percent alcohol, drinks it with his chicken,
gags and decides that this "expert" system is useless.

The fact is that even the most mundane expert systems such as this are
infinitely complex and impossible to develop with the limitations
imposed by this $12,000 diskette.
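
For what it's worth, the style of "deliberation" in the transcript is easy
to imitate with a handful of scored pattern rules. The sketch below is
entirely hypothetical; the rules and percentages are invented and bear no
relation to Teknowledge's actual M.1 knowledge base.

```python
# Answers recorded from a question-and-answer session like the transcript.
ANSWERS = {"color": "red", "body": "full", "main": "poultry",
           "turkey": "no", "sauce": "tomato", "flavor": "average"}

# Each rule: (condition dict, wine, confidence).  Invented for illustration.
RULES = [
    ({"main": "poultry", "turkey": "no", "sauce": "tomato"},
     "Zinfandel", 0.86),
    ({"main": "poultry", "turkey": "no", "sauce": "tomato"},
     "Cabernet Sauvignon", 0.86),
    ({"color": "red", "body": "full"}, "Burgundy", 0.34),
]

def advise(answers, rules):
    # Fire every rule whose conditions all match the answers, keep the
    # best confidence per wine, and return the wines best-first.
    scored = {}
    for cond, wine, conf in rules:
        if all(answers.get(k) == v for k, v in cond.items()):
            scored[wine] = max(scored.get(wine, 0.0), conf)
    return sorted(scored.items(), key=lambda kv: -kv[1])

for wine, conf in advise(ANSWERS, RULES):
    print("%s (%d%%)" % (wine, round(conf * 100)))
```

Matching rules fire on the recorded answers and the best confidence per
wine is kept, reproducing the ranked-list format of the transcript.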


[To be fair, I doubt that Teknowledge intended this expert system
to be taken seriously as a "wine advisor" if they have given it no
knowledge of individual wines.  It is more likely a demonstration
of the type of program and level of sophistication that could be
handled with their system.  If someone with inside knowledge wishes
to defend the system, I will provide a reasonable amount of AIList
"space" for the reply.

Another point:  It appears to me (from typos and other signs) that
this message was typed in and not lifted from a newswire.  I am willing
to distribute such messages (on the sender's responsibility), but I have
to be a little more conservative about passing along newswire copy.
Certain universities get the newswires gratis as a stimulus to research
in automated information retrieval.  This service will be discontinued
if it appears that our net is publishing the material in competition with
other news providers.  Warning suspensions have already occurred.  I
therefore ask readers to be selective about sending in text from
newspaper items, preferably sending only summaries or extracts (with
proper credit given).

On the other hand, I greatly appreciate it when readers send in
informative pieces like this one.  Thanks, Geoff!  -- KIL]

------------------------------

End of AIList Digest
********************

∂04-Aug-84  2220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #101    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Aug 84  22:20:02 PDT
Date: Sat  4 Aug 1984 21:16-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #101
To: AIList@SRI-AI


AIList Digest             Sunday, 5 Aug 1984      Volume 2 : Issue 101

Today's Topics:
  AI Tools - DOE MACSYMA,
  Intelligence - Turing Test,
  Games - Chess Delphi Game & Zebra Problem,
  Seminar - Learning Implementation Rules in Circuit Design
----------------------------------------------------------------------

Date: 30 July 1984 16:36-EDT
From: Harten.Paradigm at MIT-MULTICS.ARPA
Subject: DOE MACSYMA AVAILABLE FOR NOMINAL FEE

  [Forwarded from the Arpanet-BBoards distribution by Laws@SRI-AI.]


           DOE MACSYMA AVAILABLE FOR NOMINAL FEE

This message should receive the widest possible re-transmission.
Paradigm Associates, Inc.  is pleased to announce that the DOE
MACSYMA program is available from NESC.  DOE MACSYMA runs on the
DEC VAX-series of computers under the VMS operating system, and
corresponds quite closely with the 1982 version of MIT-MC MACSYMA
(the translator/compiler interface works, but the advanced plotting
features need work). Those not already members of NESC should
contact:

National Energy Software Center
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
Attention: Jan Mockler

for information about joining, and should inquire about accession
number 9847.  We are advised that there are two VAX-VMS BACKUP
format tapes with 40 MB of code: source to DOE MACSYMA and NIL,
object modules, executable images, control and auxiliary
information.  This work is supported by DOE Contract W-7405-ENG-48.

Users wishing to contribute programs and codes to the New MACSYMA
Users Consortium, for re-distribution through NESC, may send their
material to Leo Harten, Paradigm Associates, Inc., 29 Putnam
Avenue, Suite 6, Cambridge, MA 02139, or to
Harten@Mit-Multics (Multics mail is no longer case-sensitive).
[This is a volunteer effort for the improvement of the SHARE
libraries in DOE MACSYMA.]

------------------------------

Date: 31 Jul 84 13:54:05-PDT (Tue)
From: ihnp4!mhuxl!ulysses!unc!mcnc!philabs!linus!utzoo!dciem!mmt@Ucb-Vax.arpa
Subject: Re: Should The Turing test be modified
Article-I.D.: dciem.1012

The Turing test played over a teletype can give way to one played
over a graphics terminal without laying any less bare the intelligence
causing the display.  But there is an interesting lead article in
a recent issue of Science (July something) on impacts of computers,
which has something possibly relevant to say.  In the experience of
IBM, the networking facilities have been used almost never by scientists
to do joint work from one site to another, sometimes by engineers
on major projects, and frequently by managers.  Could it be that the
subtle concepts required by scientists do not transmit well over
current technology, but that the simpler ideas used repetitively by
managers are satisfactorily handled?
  If there is some kind of a technology limitation on the power of
thought conveniently communicated, then the Turing Game should be
updated whenever new technology permits.  The only thing that should
be unfair is to demand a sight of the testee, or to demand that
he/she/it move voluntarily or perform actions not expressible on
a current technology computer terminal.
--

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 3 Aug 84 11:35-PDT
From: mclure @ Sri-Unix.arpa
Subject: chess delphi game

Since the chess delphi game moves are not being published on
ailist any more, I would like to point out that one list
does receive the moves. It is chess@sri-unix. If you would
like to be added, send a note to chess-request@sri-unix.
We're on move 5 of the delphi game now.

        Stuart

------------------------------

Date: 2 Aug 84 14:05:11-PDT (Thu)
From: hplabs!hpda!fortune!amd!decwrl!dec-rhea!dec-kirk!williams@Ucb-Vax.arpa
Subject: Some thoughts on problem solving
Article-I.D.: decwrl.3076


                SYMBOLOGY AND THE STUDY OF PROBLEMS

Here is a problem which was presented to me in net.puzzle.

First I will solve the problem and then describe a proposed method of
solving it artificially.


Newsgroups: net.puzzle
Path: decwrl!decvax!ittvax!dcdwest!sdcsvax!sdcrdcf!hplabs!tektronix
      !teklds!azure!harriett
Subject: WHO OWNS THE ZEBRA
Posted: Mon Jul 30 13:14:07 1984


        The following is a brainteaser I got a long time ago. Many have tried
and failed, many have guessed. It is possible to solve alone or in a group.
If you really want to give yourself a brain hernia, try to solve it in your
head without writing anything down.  (It can be done; that is the way I solved
it the first time I tried, and it took about 15 to 20 hours over a three-day
period!)    [...]


        WHO OWNS THE ZEBRA .................

ON A CITY STREET, STRANGER ACCOSTS STRANGER WITH A XEROXED SHEET OF PAPER
AND THE QUESTION: "HAVE YOU SEEN THIS?". IN UNIVERSITY DORMITORIES THE
PROBLEM IS TACKED TO DOORS, MUCH AFTER THE MANNER OF MARTIN LUTHER.
IN SUBURBAN HOUSEHOLDS THE RING OF THE TELEPHONE IS LIKELY TO HERALD
A VOICE THAT ASKS 'IS IT THE NORWEGIAN?'

THE CAUSE OF THE EXCITEMENT IS THE BRAINTEASER BELOW. IT'S HARD, BUT
CAN BE SOLVED BY USING DEDUCTION, ANALYSIS, AND A LOT OF PERSISTENCE.

1.      THERE ARE FIVE HOUSES, EACH OF A DIFFERENT COLOR AND INHABITED BY
        MEN OF DIFFERENT NATIONALITIES, WITH DIFFERENT PETS, DRINKS, AND
        CIGARETTES.

2.      THE ENGLISHMAN LIVES IN THE RED HOUSE

3.      THE SPANIARD OWNS THE DOG.

4.      COFFEE IS DRUNK IN THE GREEN HOUSE

5.      THE UKRAINIAN DRINKS TEA.

6.      THE GREEN HOUSE IS IMMEDIATELY TO THE RIGHT (YOUR RIGHT) OF THE
        IVORY HOUSE.

7.      THE OLD GOLD SMOKER OWNS SNAILS.

8.      KOOLS ARE BEING SMOKED IN THE YELLOW HOUSE.

9.      MILK IS DRUNK IN THE MIDDLE HOUSE.

10.     THE NORWEGIAN LIVES IN THE FIRST HOUSE ON THE LEFT.

11.     THE CHESTERFIELD SMOKER LIVES NEXT TO THE FOX OWNER.

12.     KOOLS ARE SMOKED IN THE HOUSE NEXT TO THE HOUSE
        WHERE THE HORSE IS KEPT.

13.     THE LUCKY STRIKE SMOKER DRINKS ORANGE JUICE.

14.     THE JAPANESE SMOKES PARLIAMENTS.

15.     THE NORWEGIAN LIVES NEXT TO THE BLUE HOUSE.


NOW ...........

WHO DRINKS WATER?

AND ...........

WHO OWNS THE ZEBRA?

GOOD LUCK!

                ................ put that in your .bin and smoke it!!!!!

                        Harriette L. Lilly
                        Tektronix MDP Marketing
                        Technical Support
                        Beaverton ORG.
                        tekmdp!harriett


[I first saw this problem in Reader's Digest about 1964 -- does anyone
know the original source?  My first LISP program was an attempt to solve
this puzzle by simple constraint propagation (or elimination of terms
in the space of all conceivable solutions).  The program had some
trouble with the "next to" or "right of" relations, since these had
to be expressed as a set of more primitive constraints that I entered
by hand.  The program ultimately failed when it reached a choice point
requiring a binary choice and possible backtracking; I had not built
such sophistication into the control structure.  -- KIL]


*************************  SPOILER WARNING  ******************************

The answer to the five-house problem is not so straightforward.

The first step was to work out the arrangement of the houses.

The second house from the left was blue, being next to the norwegian on the
far left. The norwegian's house could not be red, that color belonging to
the englishman. It could not be green, since the green house is immediately
to the right of the ivory one and there is no house to the norwegian's left;
nor could it be ivory, since its right-hand neighbor is blue rather than
green. The norwegian owned the yellow house. The middle house could not be
green, for the middle house drank milk, and not coffee. That left red,
ivory, and green for the last three houses, with green just right of ivory;
putting ivory in the middle turns out to contradict the cigarette clues
later on, so the middle house was red, and the ivory and green houses fell
into place as the fourth and fifth.

YELLOW          BLUE            RED             IVORY           GREEN
Norwegian                       English
                Horse
                                Milk                            Coffee
Kools

The next step was to resolve who had what cigarettes.

The norwegian had the kools, which were in the yellow house, and the japanese
had the parliaments. Neither the ukrainian nor the englishman could have the
lucky strikes because each drank something other than orange juice. This
meant that the old golds and the chesterfields were mutually inclusive to
the ukrainian and the englishman, leaving the spaniard with the lucky
strikes.

The next step was to solve who had what drinks.

The ukrainian had the tea, the englishman had the milk, and the spaniard
had the orange juice. This meant that the water and coffee were mutually
inclusive to the norwegian and the japanese. Since the norwegian's house
was yellow, he could not have the coffee. Therefore, the japanese had the
coffee and the NORWEGIAN HAD THE WATER.

The next step was to solve who lived where.

The norwegian lived in the yellow house, the englishman lived in the red
house, and because the japanese drank coffee, the japanese lived in the
green house. This meant that the blue and the ivory houses were mutually
inclusive to the ukrainian and the spaniard. Since the blue house had a
horse, and the spaniard had a dog, this meant that the ukrainian owned
the blue house and the spaniard owned the ivory house.

The next step was to finish solving who had what cigarettes, which was left
incomplete earlier.

The old golds and the chesterfields belonged mutually inclusively to the
ukrainian and the englishman. Since the ukrainian lived in the blue house and
had the horse, and the man with the snails also had the old golds, the
ukrainian had the chesterfields and the englishman had the old golds.

The next step was to solve who had what animals.

The ukrainian had the horse, the englishman had the snails, and the
spaniard had the dog. This meant that the fox and zebra belonged mutually
inclusively to the norwegian and japanese. Since the japanese did not live
next to the ukrainian, who had the chesterfields, he could not have the
fox. Therefore, the norwegian had the fox and the JAPANESE OWNED THE ZEBRA.

YELLOW          BLUE            RED             GREEN           IVORY
Norwegian       Ukrainian       English         Japanese        Spaniard
Fox             Horse           Snails          Zebra           Dog
Water           Tea             Milk            Coffee          Juice
Kools           Chesterfields   Old Golds       Parliaments     Lucky Strikes
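The finished table can be checked mechanically. A small sketch in Python
(illustrative only; it just spot-checks a few of the original clues against
the assignment above):

```python
# Spot-check the final assignment against a few of the puzzle's clues.
# Houses are listed left to right, exactly as in the table above.
houses = [
    {"color": "yellow", "nation": "norwegian", "pet": "fox",
     "drink": "water", "smoke": "kools"},
    {"color": "blue", "nation": "ukrainian", "pet": "horse",
     "drink": "tea", "smoke": "chesterfields"},
    {"color": "red", "nation": "english", "pet": "snails",
     "drink": "milk", "smoke": "old golds"},
    {"color": "green", "nation": "japanese", "pet": "zebra",
     "drink": "coffee", "smoke": "parliaments"},
    {"color": "ivory", "nation": "spaniard", "pet": "dog",
     "drink": "juice", "smoke": "lucky strikes"},
]

def where(key, value):
    """Index of the house whose attribute `key` equals `value`."""
    return next(i for i, h in enumerate(houses) if h[key] == value)

assert houses[where("nation", "english")]["color"] == "red"
assert houses[where("color", "green")]["drink"] == "coffee"
assert houses[2]["drink"] == "milk"              # middle house drinks milk
assert where("nation", "norwegian") == 0         # norwegian on the far left
assert abs(where("smoke", "kools") - where("pet", "horse")) == 1
print("all clues check out")
```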


This one definitely had a few wrinkles.

                John Williams           Digital Equipment Corporation





It appears to me as though the key to solving this problem was to
discover mutually inclusive sets of symbols, in this case, pairs.

Given:

a = x or y
b = x or y
a <> b <> c
c = x or y or z

Then:

c = z


This could be utilized by first defining sets (or lists, for you Lisp fans)
of the various categories (owners, pets, drinks, etc.) and a basic initial
condition. That is, each symbol carries a list of possible connections.

The englishman is exclusively connected to the color red, whereas the japanese
is connected to all colors. The norwegian is connected to the leftmost house,
etc. The process is accomplished by rotating the context, that is, looking
for inclusive sets across the intersecting categories, eliminating possible
connections until a stable state is achieved.
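That elimination step can be sketched directly. A minimal illustration in
Python (hypothetical names, not tied to any particular Lisp): each symbol
keeps a set of candidate values, and whenever two symbols are confined to
the same two candidates, those candidates are struck from every other symbol.

```python
# Each symbol starts with a set of possible values; when two symbols share
# the same two-element candidate set, no other symbol may take those values.
candidates = {
    "a": {"x", "y"},
    "b": {"x", "y"},
    "c": {"x", "y", "z"},
}

def eliminate_pairs(cands):
    changed = True
    while changed:               # repeat until a stable state is achieved
        changed = False
        for s1, v1 in cands.items():
            if len(v1) != 2:
                continue
            # find a second symbol locked to the same pair of values
            for s2, v2 in cands.items():
                if s2 != s1 and v2 == v1:
                    # strike the pair from every other symbol's candidates
                    for s3, v3 in cands.items():
                        if s3 not in (s1, s2) and v3 & v1:
                            cands[s3] = v3 - v1
                            changed = True
    return cands

print(eliminate_pairs(candidates)["c"])   # the pair {x, y} is struck: {'z'}
```

This is exactly the Given/Then rule above: a and b soak up x and y between
them, so c must be z.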

When I solved the problem, or should I say, when I wrote down the answer,
I naturally chose the most direct context switches for analysis. I do
not believe that this is necessary. It would only mean that in an artificial
analysis, there would be contexts where analysis would not perform any
reductions. The choice of context was, on my part, intuitive, and for a
finite problem, a poor choice would only mean an increase in the amount of
time required to solve the problem, or to prove it couldn't be solved.

I think a program like this would be an interesting study of problem
reduction. The formation of symbols in this program would be an even
more interesting problem.

                        < puzzled? >

(DEC E-NET)     KIRK::WILLIAMS
(UUCP)          {decvax, ucbvax, allegra}!decwrl!dec-rhea!dec-kirk!williams
(ARPA)          williams%kirk.DEC@decwrl.ARPA
                williams%kirk.DEC@Purdue-Merlin.ARPA

------------------------------

Date: 3 Aug 84 13:04:46 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: Seminar & Binding - Learning Circuit Design

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                 R U T G E R S   U N I V E R S I T Y
                    Department of Computer Science
                              COLLOQUIUM


Speaker:        Masanobu Watanabe

Title:          LEARNING IMPLEMENTATION RULES IN CIRCUIT DESIGN
                BY HARMONIZING BEHAVIORS WITH SPECIFICATIONS

Date:           Friday, August 3, 1984



The problem of expertise acquisition by monitoring the user's response to
advice offered by the system is considered here as an implementation
rule acquisition problem in a domain of VLSI circuit design.  The task
is characterized as learning a Macro-operator in a problem space,
where data-streams and modules are viewed as states and operators,
respectively.  A Goal-Directed-Learning [Mitchell 83a] approach toward
justifiable generalization by analyzing a single training instance is
then applied to this problem.  Both the usefulness of this approach and
the remaining issues are clarified through examples.


Masanobu Watanabe will be leaving Rutgers to return to Japan.
His office address is:

Computer System Research Laboratory
C&C Systems Research Laboratories
NEC Corporation
1-1, Miyazaki 4-Chome,
Miyamae-Ku, Kawasaki
Kanagawa 213 Japan
Tel (044)855-1111 ex.2275

------------------------------

End of AIList Digest
********************

∂08-Aug-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #102    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Aug 84  10:54:12 PDT
Date: Wed  8 Aug 1984 09:12-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #102
To: AIList@SRI-AI


AIList Digest           Wednesday, 8 Aug 1984     Volume 2 : Issue 102

Today's Topics:
  LISP - Compatibility & Conversion,
  Simulation - Self-Simulation,
  Applications - Computer-Mediated Social Interaction,
  AI in Engineering - SIGART Call for Papers,
  Conference - AISB-85: Call for Papers
----------------------------------------------------------------------

Date: 4 Aug 84 17:35:00-PDT (Sat)
From: pur-ee!uiucdcs!uiucuxc!chandra @ Ucb-Vax.arpa
Subject: Charniak's "AI-Prog." book: Code Reqst
Article-I.D.: uiucuxc.28900003


Help !!!

        I have been using the AI-programming book by Charniak, Riesbeck
and McDermott as a reference. However, I do not have access to the Lisp
dialect used in their book [UCI-LISP]. I am working in MacLisp/Franz Lisp
and am having some trouble performing the conversion.

        Judging from the popularity of the book, I wondered if somebody
had already written some code to perform the conversion.

        I also have access to Zeta-lisp and Interlisp.

        Do you think you can help me?

                                Thank you,

                                Navin
                                Phone 1-800-USA-CERL

------------------------------

Date: 7 Aug 1984 15:28:26-EDT
From: kushnier@NADC
Subject: Interlisp/Zetalisp Compatibility


Can someone tell a novice the difference between Interlisp and Zetalisp... in
plain English?


                               Thanks
                               Ron Kushnier
                               kushnier@NADC.arpa

------------------------------

Date: 6-Aug-84 22:35 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2.ARPA>
Subject: Self-Simulation

How long would a simulation of its own lifetime survive?

What would be the features of the most viable such simulation feasible in the
next year or so?

   For example, would it be a collaboration on a model of THAT collaboration
   process done in order to gain some insight into its most viable forms?

Would it be a computer simulation?

Would it aim to encourage anyone (with a modem?) to play with it?

How would those who contribute to the model or provide simulation services be
compensated?

How would proposed changes to the model be moderated?

What would be its measure of its aliveness?

   The self reference makes this question somewhat tricky to answer, though we
   usually manage to do it in one fashion or another with respect to our own
   lifetimes.  The trickiness comes when playing with some portion of the model
   central to its own self definition.  We must make sure that after we are done
   playing, the measures of its aliveness are still valid.  Changing the
   measures can in turn affect a central part of the model ...

What would be the features of the language in which the model is written?

 -- kirk

------------------------------

Date: 21 July 1984 1051-PDT (Saturday)
From: sdcsla!bannon@nprdc
Subject: Computer-Mediated Social Interaction

      [Forwarded from the WorkS discussion list by Laws@SRI-AI.
      This seems to lead into the question of how AI can be used
      in computers for mediating human interactions.]


I am interested in collecting information on the use of computers to
mediate interactions between people.  It appears to me that our
computer systems today do not provide much in the way of support for
cooperation - joint problem-solving, sharing of information,
co-operative production of text, on-line (human) expert assistance.
If people know of experiments carried out in this domain, or of
experimental software facilities to support such activities, I would
appreciate it if they could mail me information - references, personal
experiences, anecdotes, etc.
[As an example, how useful have people found the "link" command on
the TENEX system?]

(P.S. I know about computer conferencing; my focus is more on other,
perhaps less-publicized facilities, but comments on the USE of
conferencing systems would be of interest.)

I will summarize the results of the survey to the [WORKS] net. Thank you.

        Liam Bannon
        Institute for Cognitive Science, C-015,
        UCSD, La Jolla, CA 92093.
        (619)-452-2807
             (452-6771)
or,
        bannon@nprdc                            -on the arpanet

        ....ucbvax!sdcsvax!sdcsla!bannon        -on the net

------------------------------

Date: Tue 7 Aug 84 11:36:19-EDT
From: Duvvuru Sriram <Duvvuru.Sriram@CMU-CS-C.ARPA>
Subject: AI in Engineering


                       SPECIAL ISSUE ON APPLICATIONS OF
                               AI IN ENGINEERING

The  April  1985 issue of the SIGART newsletter (tentative schedule) will focus
on the applications of AI in engineering. The  purpose  of  this  issue  is  to
provide  an overview of research being conducted in this area around the world.
The following topics are suggested:

   - Knowledge-based expert systems
   - Intelligent computer tutors
   - Representation of engineering problems
   - Natural language and graphical interfaces
   - Interfacing engineering databases with expert systems

The above topics are by no means exhaustive; other related topics are welcome.

Individuals or groups conducting research in this area who  would  like  to
share  their  ideas  are invited to send two copies of 3 to 4 page summaries of
their work,  preferably  ongoing  research,  before  December  1,  1984.    The
summaries  should  include  a  title,  the  names of people associated with the
research, affiliations, and bibliographical references.  Since the primary  aim
of  this  special  issue  is  to provide information about ongoing and proposed
research, please be as brief  as  possible  and  avoid  lengthy  implementation
details.    Submissions should be sent to D. Sriram at the following address or
through Arpanet to Sriram@CMU-RI-CIVE:

      D. Sriram
      Design Research Center
      Carnegie-Mellon University
      Pittsburgh, PA 15213
      Tel. No. (412)578-3603

------------------------------

Date: Wednesday,  8-Aug-84 12:15:07-BST
From: BUNDY HPS (on ERCC DEC-10) <Bundy%edxa@ucl-cs.arpa>
Subject: AISB-85: Call for Papers


                     The Society For The Study Of
          Artificial Intelligence And Simulation Of Behaviour
________________________________________________________________________

CALL FOR PAPERS         AISB 85       WARWICK, ENGLAND, APRIL 10-12 1985
________________________________________________________________________

Submissions are invited for the AISB Easter 1985 conference, to be  held
at  the University of Warwick on April 10-12 1985.  Papers may be on any
aspect of AI, including though not necessarily restricted to

       AI and Education                Reasoning
       Learning                        Knowledge Representation
       Robotics                        Vision
       Natural Language                Cognitive Modelling
       Expert Systems                  Architectures and Languages
       Planning                        Speech

Papers should  ideally  relate  to  practical  or  theoretical  work  in
progress  or  completed. Those intending to submit a paper should make a
preliminary submission of a provisional title and abstract of up to  100
words and a provisional list of keywords.
          Deadline for notification: November 1st 1984

Full papers, of 2000-5000 words, should  be  on  A4  pages  and  double-
spaced.  Three  copies  should be submitted. The first sheet should give
the title, names of authors, a brief abstract and a list of keywords, to
help in the assigning of referees.  The paper itself should start on the
next page, and authors' names should not appear in the main body of  the
text.
          Deadline for full papers: December 7th 1984

Authors will be notified  of  referees'  decisions  around  the  end  of
January  1985.   Final copies, for photo-reproduction, will be needed by
late February.  Copies of the conference proceedings will be provided to
everyone attending.

There will also be unrefereed postgraduate  poster  sessions,  to  allow
postgraduates  to display information about their work. Those wishing to
provide a poster session should contact the programme chairman, no later
than  January  31st,  1985.  Authors  of  submitted  papers  will not be
eligible to provide poster sessions.

Notification and the three copies of full papers should be sent  to  the
Programme Committee chairman:

       Peter Ross,
       Department of Artificial Intelligence,
       Forrest Hill,
       Edinburgh EH1 2QL,  Scotland.

------------------------------

End of AIList Digest
********************

∂10-Aug-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #103    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Aug 84  00:44:29 PDT
Date: Thu  9 Aug 1984 23:32-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #103
To: AIList@SRI-AI


AIList Digest            Friday, 10 Aug 1984      Volume 2 : Issue 103

Today's Topics:
  AI Tools - Frame Languages & Burroughs LISP or Prolog & Concurrent LISP,
  Puzzles - The "Zebra" Problem,
  Games - Chess & Go,
  Poetry - Robots
----------------------------------------------------------------------

Date: 8 Aug 84 11:09:28 PDT (Wednesday)
From: Cornish.PA@XEROX.ARPA
Subject: Bibliography of frame-based representation languages

    Date: Sun, 29 Jul 84 22:01 EDT
    From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
    Subject: Frame-Based Languages

    I am investigating some implementation techniques for frame-based
    representation languages with inheritance.  ...


Tim's message, partially quoted above, prompted me to ask the members of
the AIList for a bibliography of frame-based representation languages.

Thank you very much,  Jan

------------------------------

Date: Thursday,  9-Aug-84 17:52:23-BST
From: LUIS HPS (on ERCC DEC-10) <lejm%edxa@ucl-cs.arpa>
Subject: AI Software for Burroughs machines, any one?

I face the prospect of having to live with a Burroughs B6920
for the next year or so. Does anyone know of any decent Lisp
implementations running on these machines and/or where to look for
them?

Actually, I would rather use Prolog, but I don't expect to find any
implementation running on Burroughs ...

Feel free to reply either to me or the list. Thanks,

        Luis Jenkins

[ Lejm%Edxa@Ucl-Cs                              ArpaNet ]

------------------------------

Date: 9 Aug 1984 10:05-PDT
From: chaudhry%USC-CSE@USC-ECL.ARPA
Subject: Parallelism in Lisp?


        I am currently doing some research for which I need to
use parallelism in Lisp. Does anyone know of any Lisp dialect which
has a built-in parallel construct, i.e.

             (CONCURRENT s-expression-1 ... s-expression-k)

or has anyone out there implemented such a function?
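For illustration, such a construct can be emulated in any language with
threads. A rough Python sketch (this is not any existing Lisp facility, just
the intended semantics): evaluate k thunks in parallel and collect their
values in order.

```python
# A rough emulation of the requested (CONCURRENT e1 ... ek) construct:
# evaluate k expressions (passed as thunks) in parallel, return the
# results in the order the expressions were given.
from concurrent.futures import ThreadPoolExecutor

def concurrent(*thunks):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in thunks]   # start all k at once
        return [f.result() for f in futures]         # collect in order

results = concurrent(lambda: 1 + 1, lambda: 2 * 3, lambda: sum(range(10)))
print(results)   # [2, 6, 45]
```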

        Your help will be greatly appreciated.

                                Kashif Chaudhry
                                chaudhry%usc-cse@usc-ecl  [ARPA]

------------------------------

Date: Mon, 6 Aug 84 08:59 EDT
From: Hassan Aitkaci <Hassan%upenn.csnet@csnet-relay.arpa>
Subject: The Zebra Connection...

The Zebra puzzle was the object of a Prolog-Digest exchange about a
year and a half ago. Many solutions were proposed. Fernando Pereira of
SRI compiled a set of those for his own interest. The most interesting
(in my opinion) solution in Prolog was found by Hector Levesque of
Fairchild AI Lab, making a clever use of logical variables and the
unification process as an effective means to solve two-way constraint
propagations (i.e., a logical variable in Prolog has the behavior of
both a "synthesized" and "inherited" attribute, and unification
operates as the propagating mechanism). Hector's solution is given here
since I don't think it ever got posted on the Prolog Digest.

Those who are really intrigued by the method rather than the problem
(which, by the way, happens to be a large, albeit simple, assignment
problem: the complete state space has (5!)^6 nodes, or 2,985,984,000,000
nodes if you prefer!) may be interested in an alternative solution
given in a language of my design in my dissertation, also reported in
the following paper:

        Hassan Ait-Kaci, "A New Model of Computation Based on a Calculus
        of Type Subsumption", Technical Report MS-CIS-83-40, Department
        of Computer and Information Science, Univ. of Pennsylvania,
        Philadelphia, PA 19104.

Prolog solutions may be obtained from the Prolog Digest archives via
the editor, Chuck Restivo, at Stanford University. Now, please, stop
losing sleep from so much coffee -- or you may turn into a greenish
looking samurai riding a bucking zebra!

                                Hassan Ait-Kaci
                                Hassan%Upenn@Csnet-relay

/*********************************************************************
                Hector Levesque's Solution to the Zebra Puzzle
 *********************************************************************/
:- op(500,xfy,[has_left_neighbor,is_right_of,lives_next_to,is_not]).

rightmost_occupant has_left_neighbor midright_occupant.
midright_occupant has_left_neighbor middle_occupant.
middle_occupant has_left_neighbor midleft_occupant.
midleft_occupant has_left_neighbor leftmost_occupant.

X lives_next_to Y :- X has_left_neighbor Y.
X lives_next_to Y :- Y has_left_neighbor X.

X is_right_of Y :- X has_left_neighbor Y.
X is_right_of Y :- X has_left_neighbor Z, Z is_right_of Y.

X is_not Y :- X is_right_of Y.
X is_not Y :- Y is_right_of X.

differ(X1,X2,X3,X4,X5) :-
       X1 is_not X2, X1 is_not X3, X1 is_not X4, X1 is_not X5,
       X2 is_not X3, X2 is_not X4, X2 is_not X5,
       X3 is_not X4, X3 is_not X5,
       X4 is_not X5.

?- Englishman = RedHouser,
   Spaniard = DogOwner,
   CoffeeDrinker = GreenHouser,
   Ukrainian = TeaDrinker,
   GreenHouser has_left_neighbor IvoryHouser,
   WinstonSmoker = SnailOwner,
   KoolSmoker = YellowHouser,
   MilkDrinker = middle_occupant,
   Norwegian = leftmost_occupant,
   ChesterfieldSmoker lives_next_to FoxOwner,
   KoolSmoker lives_next_to HorseOwner,
   LuckyStrikeSmoker = OJDrinker,
   Japanese = ParliamentSmoker,
   Norwegian lives_next_to BlueHouser,
   differ(GreenHouser,YellowHouser,RedHouser,IvoryHouser,BlueHouser),
   differ(ZebraOwner,FoxOwner,HorseOwner,SnailOwner,DogOwner),
   differ(OJDrinker,MilkDrinker,TeaDrinker,CoffeeDrinker,WaterDrinker),
   differ(Englishman,Spaniard,Norwegian,Japanese,Ukrainian),
   differ(KoolSmoker,WinstonSmoker,ParliamentSmoker,LuckyStrikeSmoker,
          ChesterfieldSmoker).

/*********************************************************************
        To solve the puzzle, load this program... and wait!
        It takes about 45 minutes when interpreted by UNH Prolog
        on our (overloaded) VAX/780...
 *********************************************************************/

------------------------------

Date: Tuesday,  7 Aug 1984 08:49-EDT
From: bac@Mitre-Bedford
Subject: The "Zebra Problem"


   It seems to me that there was a bug either in the statement of
the Zebra puzzle, or its following solution.  Constraint #6 in the
problem stated that "The green house is immediately to the RIGHT
(your right) of the ivory house."  However, the "solution" was
worked on the basis of the green house being to the LEFT of the
ivory house.  Note that this did not change the ultimate solution;
the Norwegian still drank the water, and the Japanese had the zebra.
However, if the problem was worked as stated, eventually one reached
a point requiring a binary decision: house 3 could be either red or
ivory, house 4 either green or ivory, and house 5 either red or green.
At this point, one had to make a guess about the color of any one of
the last three houses, and explore the remaining tree for a contra-
diction.  Using the constraints implied by the "solution," with green
to the left, the problem dropped out quite naturally, and involved
no analysis or backtracking.

   So, what is the correct statement of the problem?  Should one be
able to solve such a problem using only deduction, or should analysis
be necessary?

   An interesting question (to me, anyway) is, are there any theories
abounding that relate the number of variables in such a problem to
the number of constraints that must be applied to uniquely describe the
situation?


                                                Brant Cheikes

                                        ARPA, CSNET: bac@Mitre-Bedford
                                        UUCP: ...linus!bccvax!bac

------------------------------

Date: 8 Aug 84 23:09-PDT
From: mclure @ Sri-Unix.arpa
Subject: number-cruncher vs. humans: 6th move

The Vote Tally
--------------
Folks, the moves are in and have been tallied.
The winner is: 5 ... Nf6
The runner-up was 5 ... e5
We had a narrow mix of moves.

A total of 18 votes were cast. Please relay this message to any
friends you have who might be interested in participating.

The Machine Moves
-----------------
The Prestige 8-ply replied 6. Re1 from book in 0 seconds.

                Humans                    Move   # Votes
        BR ** -- BQ BK BB -- BR         5 ... Nf6   10
        BP BP ** BB BP BP BP BP         5 ... e5     4
        -- ** BN BP -- BN -- **         5 ... a6     2
        ** WB BP -- ** -- ** --         5 ... Ne5    1
        -- ** -- ** WP ** -- **         5 ... e6     1
        ** -- WP -- ** WN ** --
        WP WP -- WP -- WP WP WP
        WR WN WB WQ WR -- WK --
             Prestige 8-ply

The Game So Far
---------------
1. e4    c5
2. Nf3   d6
3. Bb5+  Nc6
4. o-o   Bd7
5. c3    Nf6
6. Re1   ???

Commentary
----------
Steve Swernofsky <SASW @ MIT-MC>,  unrated, wrote the majority opinion.

    Since White threatens 6 d4 and there doesn't seem to be much we can do
    about it, I propose this move to counter in the center.  After 6 Re1 d5
    (7 e5 Nxe5 wins a pawn) White must either postpone his advance, 7 d3,
    or else allow us to isolate his QP, 7 ed Nxd5 8 d4 cd (9 Nxd4 either
    leads to an isolated QP or no QP at all).  Note we can't play 6 ... e6
    since 7 d4 d5 ultimately loses us a pawn due to the K-file pin.

Solicitation
------------
    Your move, please?

        Replies to Arpanet: mclure@sri-unix or Usenet: sri-unix!mclure.
        DO NOT SEND REPLIES TO THE ENTIRE LIST! Just send them to one of
        the above addresses.

------------------------------

Date: 7 August 1984 08:05-EDT
From: Robert Elton Maas <REM @ MIT-MC>
Subject: Delphi Experiment: group play against machine -> just people

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

I'd be more interested in a delphi experiment with Go instead of
Chess. Pick some starting position (probably not start of game, there
are too many good ways to play the fuseki) and see if we can converge
on the optimum way for both sides to play through to the end. Allow
backtracking at any time, thus if you suddenly see where one side made
a mistake you can change your vote at that point. If changed vote(s)
cause an alternate branch to have largest vote, the experiment shifts
to explore that branch instead of the one that had largest vote
before. Either allow everyone to vote for both black and white moves,
or divide the membership into two teams and have them select only
their own moves not opponents.

Note that my method doesn't require a go-playing program/machine to
play one side of the game.

To speed up the experiment, allow a voter to specify a whole sequence
of moves in advance, contingent on the opponent choosing the same move
as in the sequence. (For example: now I move ..., if he replies ...
then I counterreply ..., etc.; abbreviated of course.) So long as the
first move agrees with the voted move and the reply agrees with the
voted reply then the next move will be counted as a vote.

------------------------------

Date: 9 Aug 1984 08:02:05-EDT
From: kushnier@NADC
Subject: A Prediction


I predict in three short years
Robots you will see
Being sold at K-Mart,
Sears and Kleins,
And the Macy's Company.

As common as a toaster,
Inexpensive and complete,
The robots of the next few years..
An appliance quite unique


The Fifties
     By Ron Kushnier

I remember "Robbie"
And "Tobor" and his friends
In the movies of the Fifties
And on the Late Shows
Seen Again.

These Science Fiction classics
Showed the future now come true
A world of shining robots
to serve me and to serve you.

------------------------------

End of AIList Digest
********************

∂12-Aug-84  1928	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #104    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Aug 84  19:27:45 PDT
Date: Sun 12 Aug 1984 18:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #104
To: AIList@SRI-AI


AIList Digest            Monday, 13 Aug 1984      Volume 2 : Issue 104

Today's Topics:
  Hardware - Cellular Automata,
  LISP - Interlisp/Zetalisp Compatibility,
  Applications - Computerized Conferencing,
  Robotics - Dogs/Ants & Underwater Robots,
  Expert System - Construction Kit,
  Reports - Stanford Math/CS Library,
  Project Report - AI/Speech Research at Edinburgh
----------------------------------------------------------------------

Date: 9 Aug 84 15:40:58-PDT (Thu)
From: hplabs!tektronix!tekchips!mock @ Ucb-Vax.arpa
Subject: Hardware Implementations of Cellular Automata
Article-I.D.: tekchips.1014

I'm looking for information concerning hardware implementations of 2d
cellular automata.  Specifically, do implementations tend to be just
`life' rules or are they the more general case, and what sort of
speed/resolution statistics have been achieved?  I would appreciate any
sort of information about particular implementations.
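For concreteness, the `life' rule is one instance of the general 2d case:
each cell's next state is a function of its own state and its neighbor
count. A purely illustrative software sketch in Python (hardware
implementations would wire the same rule up as a lookup table):

```python
# One step of Conway's Life on a small toroidal grid: a live cell survives
# with 2 or 3 live neighbors; a dead cell is born with exactly 3.
def life_step(grid):
    n = len(grid)
    def neighbors(r, c):
        # count live cells in the 8-cell neighborhood, wrapping at edges
        return sum(grid[(r + dr) % n][(c + dc) % n]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if neighbors(r, c) == 3 or (grid[r][c] and neighbors(r, c) == 2)
             else 0 for c in range(n)] for r in range(n)]

# A "blinker" oscillates between a column and a row of three cells.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
after = life_step(blinker)
print(after[2])   # the vertical bar becomes horizontal: [0, 1, 1, 1, 0]
```

A more general implementation would replace the hard-wired survive/birth
test with an arbitrary table indexed by (state, neighbor count).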

Jeff Mock

tektronix!tekchips!mock

------------------------------

Date: Fri 10 Aug 84 09:17:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Re: Hardware Implementations of Cellular Automata

The Golay processor has been around for about two decades; it's a
hard-wired hexagonal processor performing logical operations on boolean
image data.  I believe that various medical image processing
systems offer shrink/expand cycles for image overlays that can be
similarly programmed, and there are both software languages and
parallel-processor projects aimed at permitting easy specification
of parallel local operations on image arrays.

                                        -- Ken Laws

------------------------------

Date: Thursday, 9 August 1984, 22:01-EDT
From: Robert P. Krajewski <RpK%MIT-OZ@MIT-MC.ARPA>
Subject: Interlisp/Zetalisp Compatibility

Since I have only a limited knowledge of Interlisp, this will be sketchy at
best.

(1) All arguments to functions in Interlisp are optional.  In ZetaLisp (Common
Lisp), optional arguments are specified in the function definition with special
keywords.

(2) The NTH function is different.  (Rather miscellaneous, eh?)

(3) Interlisp does not have a package system.  (In this respect, it is like
MacLisp and Franz Lisp.)  The modern MacLisp descendants (ZetaLisp and NIL, and
the Common Lisp dialects) contain such a system, which is basically a way of
managing the namespace of symbols with ``packages,'' the equivalent of old
obarrays.  To give a quick example, suppose one has a symbol in the CELLOPHANE
package.  From within that package, one can refer to a symbol called WRAP in
that package by WRAP, and refer to outside as CELLOPHANE:WRAP.  It can get much
more hairy than this (what about global symbols like CONS ?), but that's the
general idea.

(4) Both Interlisp (in the workstation implementations) and Zetalisp have
facilities for processes and windows, but they are obviously handled in
different ways.  The same is true for hash tables and macros.

(5) Zetalisp does not have a DWIM facility; the error handler is powerful
enough to enable the user to recover intelligently from errors while being
informed of the condition.  In keeping with the MacLisp tradition, the basic
unit for organising source code is the file, and a display editor that is
highly integrated with the Lisp environment is used.  In contrast, structure
editors are used in Interlisp to edit code.

I hope somebody more familiar with Interlisp can answer your question.  In
terms of compatibility, there doesn't seem to be much of it between the two
dialects.

------------------------------

Date: Thu, 09 Aug 84 16:54:52 EDT
From: "Martin R. Lyons" <991@NJIT-EIES.MAILNET>
Subject: Computerized Conferencing and AI


     In response to Liam Bannon's (bannon@nprdc) message of July 21:

     We at the Computerized Conferencing and Communications Center of
New Jersey Institute of Technology operate the Electronic Information
Exchange System (EIES).  Over our six years of operation, we have
studied how people interact not only with the system, but with others,
and how the two modes differ.  In most instances, it has been found
that Computer Conferencing increases productivity and creativity, by
allowing not only ongoing discussion, but also 24 hour access.

     Over the past few months, I have begun looking into the
feasibility of integrating an AI subsystem into our most recent
effort, EIES II, a new, improved version of EIES.  At this point, the
design is intended to provide user aid: to field questions (English text)
from the user and try to answer them as intelligently as possible.
Most of our questions to the online consultants here take the form of
'Where do I find the conference on IBM PCs?', etc.

     I have available two lists: the first is of the research
reports available from CCCC, and the second is of major works by Murray
Turoff and Starr Roxanne Hiltz, two of the original designers of EIES.
S. R. Hiltz is also a sociologist and along with Elaine Kerr has
carried out extensive research in user interactions and usage.  In
order to spare the list the 300+ lines of references, I am not
including them here.  If you would like a copy, drop me a message and
I'll send them to you.  Please feel free to contact me here if I can
be of any help.


 MAILNET: Marty@NJIT-EIES.Mailnet
 ARPA:    Marty%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA
 USPS:    Marty Lyons, CCCC/EIES @ New Jersey Institute of Technology,
          323 High St., Newark, NJ 07102    (201) 596-EIES

------------------------------

Date: 7 Aug 84 18:20:00-PDT (Tue)
From: pur-ee!uiucdcs!uicsl!mihran @ Ucb-Vax.arpa
Subject: Re: Robotics - (nf)
Article-I.D.: uicsl.12300002

Forget about the dogs. Even an ant is quite sophisticated compared to
the capabilities we want the robots to have, at least in the near
future. I was watching one of these nature shows on PBS the other day
which was showing the behaviour of the ants underground. I would guess
that if the technology develops to the level of sophistication that
allows us to implement the sensory-motor coordination that these ants
have, our robots will probably be more than adequate to perform with
great skill the necessary tasks at an assembly line or at a dangerous
mine.

------------------------------

Date: 6 Aug 1984 13:31:56-EDT
From: Chuck.Thorpe at CMU-CS-IUS
Subject: Underwater Robots

          [Forwarded from the CMU-C bboard by Laws@SRI-AI.]
 
The Mobile Robot Lab has just set a new record for proven depth capability
for a CMU-built submersible robot.  Neptune, running under an umbrella taped
to its camera mast, operated successfully at depths up to .1" during the
recent rain storm.

------------------------------

Date: 7 Aug 84 14:49:23-PDT (Tue)
From: 
Subject: expert system construction kit
Article-I.D.: uvicctr.501


     E X P E R T    S Y S T E M    C O N S T R U C T I O N    K I T


     The knowledge independent LISP-based expert system called PORTAL
is now available for distribution from the Laboratory for Computer
Enhanced Cognition, University of Victoria.

     It is a simple rule-based system, with a forward inference
mechanism, and includes supporting utilities for a rule and entity
editor as well as some analytical tools for validating your knowledge
base.
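A forward-inference mechanism of the kind described above can be sketched
in a few lines of Python; the rules and facts here are invented
illustrations, not PORTAL's actual code or knowledge base.

```python
# Minimal forward-chaining sketch (hypothetical rules; not PORTAL's code).
# Each rule pairs a set of antecedent facts with one consequent fact; the
# engine keeps firing any rule whose antecedents are all among the known
# facts until no new fact can be derived (a fixed point).

RULES = [
    ({"has-fever", "has-rash"}, "suspect-measles"),
    ({"suspect-measles"}, "recommend-lab-test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # fire the rule if all its antecedents hold and it adds something new
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(sorted(forward_chain({"has-fever", "has-rash"}, RULES)))
```

Rule and entity editors like the ones mentioned above would then operate on
the RULES table; the inference loop itself stays this simple.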

     The system has been written in Franz LISP under UNIX 4.1 BSD, and
can be obtained on a non-commercial, non-disclosure, as-is basis for a
nominal fee.  The distributed version includes source and executable
code, and a MAKE system for implementation.

     We are offering this system in the interests of making a simple
but complete expert system construction tool available to research
laboratories, so that the difficult problems of knowledge acquisition
can be attacked on a broad front.

     The fee is $200.00 for universities, and $500.00 for other
laboratories.  Interested parties should send a purchase order for the
amount indicated, or call.




Ernie Chang

Laboratory for Computer Enhanced Cognition             August 1, 1984
Department of Computer Science
University of Victoria
Victoria, B.C. V8W 2Y2
Canada

604-721-7232 (7233)                     ...uw-beaver!uvicctr!echang
                                        ...ubc-vision!uvicctr!echang


References:

1. Chang, EJH, McNeely M, Gamble K. An Expert System for Liver Function
   Test. Proc. 4th Jerusalem Conference on Information Technology, 1984.

2. Chang, EJH, McNeely M, Gamble K. Strategies for Choosing the Next
   Test in an Expert System. Proc. American Association for Medical
   Systems and Informatics Conference, San Francisco, May 1984.

------------------------------

Date: Thu 9 Aug 84 23:34:13-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Latest Math & CS Library "New Reports List" posted on-line.

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The latest Math & Computer Science Library "New Reports List" has been
posted on-line.  The file is "<LIBRARY>NEWTRS" at SCORE, "NEWTRS[LIB,DOC]"
at SAIL, "<CSD-REPORTS>NEWTRS" at SUMEX, and "<LIBRARY>NEWTRS" at SIERRA.
In case you miss a reports list, the old lists are being copied to
"<LIBRARY>OLDTRS" at SCORE and "<LIBRARY>OLDTRS" at SIERRA where they will
be saved for about six months.

If you want to see any of the reports listed in the "New Reports List,"
either come by the library during the display period mentioned or send a
message to LIBRARY at SCORE, giving your departmental address and the
six-digit accession numbers of the reports you want to see, and we will
check them out in your name and send them to you as soon as they are available.

The library receives technical reports from over a hundred universities
and other institutions.  The current batch includes - among others -
reports from:


      Carnegie-Mellon University. Department of Computer Science.
      Carnegie-Mellon University. Robotics Institute.
      Harvard University. Center for Research in Computing Technology.
      IBM. Research Division.
      Institut National de Recherche en Informatique et Automatique (INRIA).
      Mathematisch Centrum (Amsterdam).
      U.K. National Physical Laboratory. Division of Information Technology
        and Computing.
      Universitaet Hamburg. Institut fuer Informatik.
      Universitaet Karlsruhe. Institut fuer Informatik.
      University of Illinois at Urbana-Champaign. Department of Computer
        Science.
      University of Wisconsin-Madison. Department of Computer Science.


                                        - Richard Manuck
                                          Math & Computer Science Library
                                          Building 380 - 4th Floor
                                          LIBRARY at SCORE

------------------------------

Date: Thursday,  2-Aug-84 16:23:21-BST
From: HENRY T HPS (on ERCC DEC-10) <hthompson%edxa@ucl-cs.arpa>
Subject: Project Report - AI/Speech Research at Edinburgh

                       [Edited by Laws@SRI-AI.]

A major 5-year research and development project in speech recognition is to
start at the University of Edinburgh in October 1984 under the direction of
Dr John Laver and Dr Henry Thompson, in conjunction with members of the
Departments of Artificial Intelligence, Electrical Engineering and
Linguistics.

The goal of the project is a machine assisted speech transcription system -
a text input device starting from spoken input, and depending on incremental
interactions between user and system to develop a final text.

Computing resources will include 2 VAX 11/750s running UNIX(TM Bell
Laboratories) and a network of Xerox 1108s running Interlisp-D.  The eventual
target is the Alice parallel reduction machine.

In addition to the ten people already involved, seventeen new
positions are available.  For further information, write Dr J. Laver,
Centre for Speech Technology Research, Department of Linguistics, Adam
Ferguson Building, George Square, Edinburgh EH8 9LL, SCOTLAND, or call
031 667-1011 x6380.

------------------------------

End of AIList Digest
********************

∂14-Aug-84  2357	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #105    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Aug 84  23:56:26 PDT
Date: Tue 14 Aug 1984 22:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #105
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Aug 1984    Volume 2 : Issue 105

Today's Topics:
  Workshop - AI and Dataflow Machines,
  Robotics - Ping-Pong Competition & Discussion List,
  Literature - Looking for COLING-76,
  Brain Theory - Language and EEGs,
  Natural Language - On Having Virtually No Crime Rate,
  Anecdote - Fuzzy Cat,
  Philosophy - Cause and Effect,
  Conference - 1985 Symposium on Logic Programming
----------------------------------------------------------------------

Date: Tue, 14 Aug 84 08:18:26 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: AI and DataFlow machines -- a request for interested parties

A friend asked me to post this on the bboard.... -smL

 From DEBONI@Ames-VMSB Mon Aug 13 16:49:54 1984

 To All Interested Parties: AI AND DATAFLOW

 There will be a cooperative project in modelling the performance of dataflow
 systems, involving personnel from MIT, at NASA Ames Research Center, Moffett
 Field, CA, the last two weeks of September.  Interested participants are
 sought from the AI community who have either general algorithms or specific
 applications they would like to see run on such systems.  Indications of
 interest or queries for further information should be addressed to
 "DEBONI@AMES-VMSB".

------------------------------

Date: Mon 13 Aug 84 12:14:25-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Robot Ping-Pong Competition Announced

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

    The current issue of Robotics Age has an announcement for a robot ping-pong
competition to be held in England in 1985.  The rules are set up to encourage
low-cost entries; there are upper limits on size and power of the mechanism
and the visual environment is so defined as to provide high contrast under
uniform illumination.  You don't even have to acquire the image of the ball;
it is served from a known location above the net and only has to be tracked.

------------------------------

Date: 9 Aug 84 5:38:25-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!whuxle!spuxll!abnjh!estate @ Ucb-Vax.arpa
Subject: Speaking of Robotics...
Article-I.D.: abnjh.788

Would anyone be interested in starting a newsgroup for amateur robotics?
I haven't got the slightest idea how to go about starting a new newsgroup,
but I've seen quite a few interesting articles and ideas on the net
concerning robotics.  To my knowledge there are only a very few periodicals
dealing with the subject of amateur robotics and robotics experiments, but
I have found a few interesting articles on simple robotic interfaces that
will work off a home computer (one of which I built).  If anyone else out
there in net-land enjoys tearing apart their household appliances and
reconstructing them to do things that they were never meant to do,
please let me know and we'll see if we can get anything rolling!


(Visions From The Orcrest Stone)

Carl D.

------------------------------

Date: Fri, 3 Aug 84 22:47:15 EDT
From: Steven Lytinen <Lytinen@YALE.ARPA>
Subject: COLING-76


I am interested in seeing several articles in COLING-76.  Unfortunately,
I can't find a copy anywhere.  Does anyone have a copy that they would
be willing to lend me?

Steve Lytinen (lytinen@yale)

------------------------------

Date: 13 Aug 1984 14:21:24-EDT
From: sde@Mitre-Bedford
Subject: Whorfian hypothesis, sort of

On a different net it was alleged, though w/o references to check,
that people, both of Japanese and British genetic background, when
raised speaking Japanese show EEG responses to music on the LEFT
side, but that when raised speaking English, their responses are
on the RIGHT side. This has some interesting implications, if true.
Does anyone out there know anything more about such a phenomenon?
Although no references were cited, the description was detailed
enough to sound like it had a basis in fact.
As a possibly related matter, I seem to recall reading somewhere
that musicians, or some subset of them, also show EEG responses
to music on the LEFT side.
Being quite curious about both, I'd love to get more information,
if anyone can assist.
                                  Thanx in advance,
                                  David   sde@mitre-bedford

------------------------------

Date: 13 Aug 84 14:48:35-PDT (Mon)
From: ihnp4!houxm!hou2d!wbp @ Ucb-Vax.arpa
Subject: On having virtually no crime rate.
Article-I.D.: hou2d.472

        "Saudi Arabia has virtually no crime rate," is what the commercial
told me about 30 times before I realized what they were really saying.
        I understand what having virtually no crime is, and also a very
low crime rate is within my grasp.  But virtually no crime rate is a very
odd construction.
        If a place has no crime rate then this means that the statistics
are not gathered, and that's O.K. too.
        If the crime rate is virtually non-existent then it indeed exists,
but is in a state of "almost non-being" which may mean that for all practical
purposes it does not exist, but is known to a select few who will tell
no-one.  (Or may be a reflection of their different system of justice!)

        Are virtual rates calculated on virtual machines, and does one
need either transcendental or imaginary numbers to express them?

        Seriously, what would a program do with such a sentence?
And even more interesting, would a sophisticated program have any
problem with it, or would it not even see a problem with it, as I am
sure millions of people did not!
                                Submitted for your approval,
                                Wayne Pineault (hou2d!wbp)

------------------------------

Date: 12 Aug 84 10:53:16-PDT (Sun)
From: ihnp4!houxm!houxz!vax135!ariel!norm @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: ariel.716

>  Ahem.  Cause and effect may exist, and indeed, in order to function as
>  human beings, we seem to need to behave as if it exists, but I don't
>  think the principal of cause and effect can be *proved* to exist.  The
>  association of two events in time does not imply a connection between
>  the two.
>
>  (For a more detailed argument, read Hume and Kant)
>
>  --Ray Chen

The concept of proof depends upon the concepts of cause and effect, among
other things.  Even the ideas "anything" and "functioning" depend upon
the idea of cause and effect.  All of these concepts depend on or are
rooted in the concepts of identity and identification.  Here's why:

To be is to be something in particular, to have a specific identity, or
having specific characteristics.  What does it mean to have specific
characteristics or a specific identity?  It means that in a particular
context, the entity's existence is manifested in a particular way.  An
entity IS what it can DO (in a given context).

So what's causality?  The law of identity applied to action.  Things do
what they do, in any given context, BECAUSE they are what they are.
"What they are" includes or consists of "what they can do".
This is true irrespective of our ability to identify what they are.

Hume's and Kant's arguments re causality rest on the analytic-synthetic
dichotomy.  For the original presentation of the views that smash
this false dichotomy, see Leonard Peikoff's article "The Analytic-
Synthetic Dichotomy" in the back of recent editions of Ayn Rand's
"Introduction to Objectivist Epistemology".  For the epistemological
basis of Peikoff's article, read Rand's Intro.


(I almost posted this to net.cooks, but GOOD cooks know this already...)

-Norm Andrews, AT+T Information Systems, (201) 834-3685

------------------------------

Date: Mon, 13 Aug 1984  18:51 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: Expert Systems, Fuzzy Logic, and Fuzzy Batteries

I had a Fuzzy cat named Zada once.  (This is really the truth.)  He
was named after Lotfi, of course.

Fanya

------------------------------

Date: 8 Aug 84 12:43:00-PDT (Wed)
From: hplabs!hp-pcd!uoregon!conery @ Ucb-Vax.arpa
Subject: 1985 Symposium on Logic Programming
Article-I.D.: uoregon.30100002

                        -- CALL FOR PAPERS --

                1985 Symposium on Logic Programming

              Boston, Massachusetts, July 15-18, 1985

     Sponsored by IEEE and its Technical Committee on Computer Languages

The symposium will cover fundamental principles and important innovations in
the design, definition, and implementation of logic programming systems and
applications.  Of special interest are papers related to parallel processing.
Other topics of interest are (but are not limited to) FGCS, distributed control
schemes, expert systems, natural language processing, systems programming,
novel implementation techniques, and performance issues.

Authors should send 8 copies of their manuscript, plus an extra copy of the
abstract, to:

                John Conery
                Department of Computer and Information Science
                University of Oregon
                Eugene, OR   97403

Paper length should be 8-20 typed, double-spaced pages, including figures and
abstract.  Submissions will be considered on the basis of appropriateness,
clarity, originality, significance, and overall quality.

Deadline for submission of papers is November 16, 1984.  Authors will be
notified of acceptance or rejection by March 8, 1985, and camera ready copy
must be returned by May 10, 1985.  Authors of accepted papers will be expected
to sign a copyright release form.

** Proposals for full or partial day tutorials are also being solicited.
Send a one to three page proposal to John Conery by November 16.

Conference Chairman             Technical Committee Co-Chairmen

Doug DeGroot            Jacques Cohen           John Conery
T.J. Watson Res. Ctr.   Computer Sci. Dept.     Dept. of Computer
PO Box 218              Ford Hall                  and Information Science
Yorktown Heights,       Brandeis University     University of Oregon
   NY 10598             415 South St.           Eugene, OR 97403
(914)945-3497           Waltham, MA  02254      (503)686-4408
                        (617)647-3370

        CSNET:          jc@brandeis             jc@uoregon


                        Technical Committee

        Ken Bowen (Syracuse)            Jack Minker (Maryland)
        Jacques Cohen (Brandeis)        Fernando Pereira (SRI)
        John Conery (Oregon)            Alan Robinson (Syracuse)
        Doug DeGroot (IBM Yorktown)     Sten-Ake Tarnlund (IBM Yorktown)
        Seif Haridi (IBM Yorktown)      D. S. Warren (Stony Brook)
        Bob Keller (Utah)               Jim Weiner (New Hampshire)
        Gary Lindstrom (Utah)

------------------------------

End of AIList Digest
********************

∂19-Aug-84  1854	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #106    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Aug 84  18:53:15 PDT
Date: Sun 19 Aug 1984 17:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #106
To: AIList@SRI-AI


AIList Digest            Sunday, 19 Aug 1984      Volume 2 : Issue 106

Today's Topics:
  AI & Society - Misrepresentations of AI,
  AI Tools - Taxonomy Assistant, Fuzzy Operational Research,
  Fifth Generation - Budget Cuts,
  Abstracts - Natural Language Programming,
  Natural Language - Crime Rate,
  User Interface - The Ebstein Test
----------------------------------------------------------------------

Date: Tue 14 Aug 84 22:08:59-EDT
From: MM%MIT-OZ@MIT-MC.ARPA
Subject: Misrepresentations of AI

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

I am looking for examples of outrageous representations of the current state of
AI appearing in any popular newspapers or magazines. If you know of any, I'd
appreciate hearing them.

                                    Melanie Mitchell

------------------------------

Date: 16 Aug 1984 15:57:11 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: Taxonomy Assistant

I would like to have some sort of computational aid for creating taxonomies.

In trying to understand a collection of objects or data, often one of
the most helpful things to do is to create a taxonomy of it.  Comparing
and classifying things makes one think about their attributes and how
they relate.  It also helps identify potential varieties of objects that
are "missing."

Often several attempts are required before a satisfactory result is
achieved, which can involve a lot of bookkeeping and an overwhelming
amount of detail, so much that significant patterns are missed.

Also, there are skills for doing taxonomies, and I don't have them all.

For all these reasons, it would be good to embed a lot of the support
operations for creating a taxonomy in a program, one that would let the
machine do bookkeeping, systematic evocation of data, consistency
checking and some pattern identification, but still leave me in charge.
(Perhaps it's already been done.)

What sorts of tools are out there?  Is this already embedded in some
collection of intellectual prosthetics?  Where should I look for such
programs?

Bill Mann

[There are indeed tools for creating numerical taxonomies --- see the
documentation for cluster analysis programs in statistical packages
such as BMD, SPSS, SAS, etc.  For other leads I would suggest the
Pattern Recognition journal, the seven (massive) IEEE conferences on
pattern recognition, and the Classification Society (c/o Dr. George
W. Furnas, Room 2C-572, Bell Communications Research, Inc., Murray
Hill, NJ 07974).  Can anyone suggest available software for nonnumeric
taxonomy construction or for handling the associated bookkeeping?  -- KIL]
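The numerical taxonomies those cluster-analysis packages compute can be
sketched as single-linkage agglomerative clustering; the objects and
attribute vectors below are hypothetical toy data, and the code is only an
illustrative sketch of the technique.

```python
# Single-linkage agglomerative clustering: a toy numerical-taxonomy sketch.
# Clusters are merged bottom-up; at each step the two clusters whose closest
# members are nearest get combined, yielding a dendrogram-like merge history.

def dist(a, b):
    # Euclidean distance between two attribute vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(items):
    clusters = [[name] for name in items]          # each object starts alone
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest closest-member distance
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(items[a], items[b])
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]),
        )
        merged = clusters[i] + clusters[j]
        merges.append(tuple(sorted(merged)))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

# Hypothetical attribute vectors for four objects to be classified.
items = {"wren": (1, 1), "robin": (1.2, 0.9), "bat": (5, 5), "shrew": (5.2, 4.8)}
print(single_linkage(items))
```

The merge history is the taxonomy: similar objects pair off first, and the
final merge spans the whole collection.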

------------------------------

Date: 15 Aug 84 15:44:00-PDT (Wed)
From: pur-ee!uiucdcs!uiucuxc!chandra @ Ucb-Vax.arpa
Subject: Fuzzy reasoning and AI in Operational Research and Management
         Science.
Article-I.D.: uiucuxc.28900005


         I am a graduate student who is trying to apply expert systems
to managerial decision making.  My thesis deals with applying
knowledge-based techniques to decision support systems which use
mathematical models and Operations Research techniques.

        I would like to know if anybody is aware of some good references
on AI applications in Operations Research / decision making: any
symposia, meetings, books, etc.

        I would really appreciate your help, thank you.

                                                Navin Chandra

                                                Phone 1-800-872-2375
                                                      (ask for extension 413)

------------------------------

Date: 15 Aug 84  0418 PDT
From: Laws@SRI-AI
Subject: Japan's Fifth Generation Project

The following summary is from a NYT article, Japan Appears To Falter
Attempting To Create New Computer, by Andrew Pollack.

Kazuhiro Fuchi, research director of the 5th generation project,
says that the project budget has been drastically cut back on goals
of vision, speech understanding, and natural language translation.
The core program -- including reasoning, understanding written language,
and intelligent programming -- has been preserved.

The first phase, development of a database machine and a sequential
reasoning engine, has been completed on schedule.  Funding has been
cut by 50% for the second phase, including development of a parallel
machine and research in perception and man-machine interfaces.  Project
officials claim that: it is only natural the original vague goals would
be narrowed; the original funding proposals were overly generous; and
several private companies are researching the cut problems on their
own, so there is little need for the institute to work on them.
Finding qualified researchers was more of a problem than budget
limitations anyway.  ("I personally felt it was rather difficult to
spend 100 billion yen," Fuchi said.)

On the other hand, there will now be less opportunity for exploring
different approaches.  The third phase may be jeopardized if dead
ends are encountered in the second phase.

The Japanese government is trying to reduce a large deficit by
eliminating spending increases in all areas except defense and foreign
aid. If no exception is made for advanced technology, the ministry
would have to make cuts elsewhere to increase the budget of the Fifth
Generation project or find new sources of revenue.  The ministry has
continued to give the project favorable budget treatment, even though
the agency's overall budget for high technology has dropped 20 percent
during the last three years.

One way to increase the budget is to ask industry to provide money.
Eight computer companies are providing researchers to the project and
are building machines for it, but they are not eager to provide
speculative, long-term research funds.  Fuchi also is concerned that
corporate funding would limit the project's freedom to pursue its own
goals.

    The institute wants to increase its staff from 50 researchers to
    100 next year. But artificial intelligence researchers are rare in
    Japan, and the companies are reluctant to part with more.  "The
    Fifth Generation will produce technology for the 1990s," said
    one official of Fujitsu Ltd., Japan's largest computer company.
    "But we need products for our customers before the 1990s."


                                        -- Ken Laws

------------------------------

Date: Fri 17 Aug 84 18:01:05-PDT
From: Kenji Sugiyama <SUGIYAMA@SRI-AI.ARPA>
Subject: Abstracts - Natural Language Programming

Here are two abstracts of papers concerning a Natural Language
Programming System under development at Fujitsu Laboratories in Japan.

* "Understanding of Japanese in an Interactive Programming System"
   by Kenji Sugiyama, Masayuki Kameda, Kouji Akiyama & Akifumi Makinouchi
   in COLING84 (10th International Conference on Computational Linguistics).

        Abstract:  KIPS is an automatic programming system which generates
standardized business application programs through interactive natural
language dialogue.  KIPS models the program under discussion and the
content of the user's statements as organizations of dynamic objects
in the object-oriented programming sense.  This paper describes the
statement-model and the program-model, their use in understanding Japanese
program specifications, and how they are shaped by the linguistic
singularities of Japanese input sentences.

* "An Experimental Interactive Natural Language Programming System"
   by Kenji Sugiyama, Kouji Akiyama, Masayuki Kameda & Akifumi Makinouchi
   to appear in Electronics and Communications in Japan which is published
        by Scripta Technical, Inc. (Silver Spring, MD 20910) in cooperation
        with IECEJ (the Institute of Electronics and Communication Engineers
        of Japan, Tokyo 105).

        Abstract:  This paper discusses the problems encountered in the
development of the interactive natural language programming system (KIPS)
from three aspects: input sentences, the target program, and communication
between the user and the system.  Based on the recognition of these
problems, an interactive natural language programming system is proposed,
which is constructed on a model of the task domain consisting of active
objects in the object-oriented programming sense.  The proposed system is
composed of four modules: parser, specification acquisitor, coder, and
user interface.  These modules realize the functions of information
extraction from Japanese sentences, assimilation of fragmentary
information, automatic programming, and man-machine interface,
respectively.  Lastly, future development of the system is discussed.


Contact point is as follows:

        Kenji Sugiyama

        (until Sept. 5 and thereabout)
        SRI International, BS253
        Menlo Park, CA 94025
        (415)852-4402
        Sugiyama@SRI-AI

        (afterwards)
        Software Laboratory, Fujitsu Laboratories Ltd.
        Kawasaki-shi, 211 Japan
        (044)777-1111

------------------------------

Date: 15 Aug 84 10:46:31-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!ritcv!ccivax!abh @ Ucb-Vax.arpa
Subject: Re: On having virtually no crime rate.
Article-I.D.: ccivax.195

This kind of sentence structure is highly dependent upon
perspective and context.  If a problem is found on first parse,
perhaps a simple substitution by synonym would do the trick;
in this case, substituting 'nearly' for 'virtually'.
Contextually, though, the program would have to know that rates
are for numerical comparison, in which case one of the better
semantic results might be "nearly no crime rate in comparison."
Exploring the reasons why people interpret the same written words
differently would be an interesting endeavor.
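The substitute-a-synonym-and-reparse repair can be sketched as follows; the
toy synonym table and the one-rule parser stub are hypothetical stand-ins
for a real lexicon and grammar.

```python
# Sketch of "if the first parse fails, substitute a synonym and reparse".
# The synonym table and the parser stub are invented for illustration.

SYNONYMS = {"virtually": ["nearly", "almost"]}

def parses(sentence):
    # Stand-in for a real parser: reject only "virtually" directly
    # modifying "no", the construction discussed above.
    words = sentence.split()
    return not any(a == "virtually" and b == "no"
                   for a, b in zip(words, words[1:]))

def repair(sentence):
    if parses(sentence):
        return sentence                      # first parse succeeded
    for word, subs in SYNONYMS.items():
        for sub in subs:
            candidate = sentence.replace(word, sub)
            if parses(candidate):
                return candidate             # a synonym rescued the parse
    return None                              # no substitution helped

print(repair("Saudi Arabia has virtually no crime rate"))
```

A real system would also need the contextual check mentioned above, namely
that rates are quantities fit for numerical comparison.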

                                Andrew Hudson
--
"Freedom of choice is what you got
 Freedom from choice is what you want"
                         - DEVO
        ...[rlgvax | decvax | ucbvax!allegra]!rochester!ritcv!ccivax!abh

------------------------------

Date: 20 Aug 84 7:44:08-EDT (Mon)
From: ihnp4!drutx!houxe!hogpc!houti!ariel!vax135!ukc!west44!westcsr!pkelly
      @ Ucb-Vax.arpa
Subject: The Ebstein Test, or DHSS loses to AI.
Article-I.D.: westcsr.169

I read in today's Guardian newspaper of an unpublished report describing
an experiment performed at British Department of Health and Social
Security offices (D.H.S.S. or colloquially 'the SS'). People
claiming social security benefit often queue for literally hours
to see DHSS staff about what benefits they're entitled to.

   In a small selection of offices computers were installed
which people could choose to use instead of queueing to see a
person. The machines took about half an hour to do a
consultation, and produced an extensive print-out at the end.
Finally, the clients were questioned on their experience.

   Despite often never having communicated with a computer
before, 85 percent said they found the machines a better source
of information than DHSS staff.

   The mechanised interview took a bit longer than talking to a
human, but as the report's author, Joyce Ebstein, concludes,
"What the professionals don't appreciate is that people don't
object to long periods of service and attention. What they do
object to is long periods of waiting for service and attention".

   Of course, this is no evidence to support wholesale
redundancies - it simply underlines the abominable service being
provided.

   But it does bring to mind an alternative to the Turing test,
in which it is unimportant whether users can distinguish between
a machine and a human. What counts is which they prefer.

   The test no longer defines Artificial Intelligence, but
perhaps it makes a more sensible objective for artificial
intelligence research.

        Yours, Paul Kelly, Westfield College, Univ. of London.

(..vax135!ukc!west44!westcsr)

------------------------------

End of AIList Digest
********************

∂19-Aug-84  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #107    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Aug 84  19:49:30 PDT
Date: Sun 19 Aug 1984 17:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #107
To: AIList@SRI-AI


AIList Digest            Monday, 20 Aug 1984      Volume 2 : Issue 107

Today's Topics:
  LISP - Interlisp/Zetalisp Compatibility & Charniak & Common Lisp,
  Brain Theory - PET Experiments,
  Philosophy - Causality and Proof & AI,
  Project Reports - UPenn
----------------------------------------------------------------------

Date: 15 Aug 84 10:50:44-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!noscvax!goodhart @ Ucb-Vax.arpa
Subject: Re: Interlisp/Zetalisp Compatibility
Article-I.D.: noscvax.583

Symbolics (phone- (213) 473-6583) publishes "Interlisp Compatibility Package
User's Guide" which discusses compatibility issues between Interlisp and
Zetalisp.

------------------------------

Date: 14 Aug 84 14:49:29-PDT (Tue)
From: decvax!mcvax!enea!erix!seeb @ Ucb-Vax.arpa
Subject: Re: Charniak's "AI-Prog."book:Code Reqst
Article-I.D.: erix.550

Sure, when I first got my hands on that book that was my reaction too:

           "Translation please!".

But there wasn't any around just then so I had to work my way through it
by myself. And I discovered something:

Exercises in reading different Lisp dialects are actually good for you,
even when they are crammed with programmer-defined macros. After all,
being able to read is half the secret of communication. If you can't
do it --- well, you run into trouble ("Translation please!").

So my advice in this case is: Do it yourself!

/Sten-Erik E Bergner - LM Ericsson

------------------------------

Date: 13 Aug 84 8:21:00-PDT (Mon)
From: pur-ee!uiucdcsb!nowicki @ Ucb-Vax.arpa
Subject: Re: Common Lisp - (nf)
Article-I.D.: uiucdcsb.5500009


I am also interested in such info. We have Sun-2's running 4.2 and I am
interested in obtaining Common Lisp for them.


-Tony Nowicki
{decvax|inuxc}!pur-ee!uiucdcs!nowicki

------------------------------

Date: Fri, 17 Aug 84 15:18:54 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Common Lisp.


I submitted earlier a query about Common Lisp.  Here's the rundown
on what I've come upon so far.

    ... Common Lisp is to be released by DEC for VAX/VMS
        ``in this quarter''.
    ... DEC developed Common Lisp at CMU using its money and
        apparently many of its own personnel, so they own
        the code developed.
    ... DEC developed a version to run under 4.n bsd VAX/Unix
        along with the VAX/VMS version, but the VMS version
        had priority, so the VAX/Unix version is lagging behind
        a bit.  They are supposed to release one soon, however.
    ... There is no version running on 68000's at present,
        though apparently some companies are working on it.
    ... There may be a Common Lisp mailing list (Common-Lisp@SU-AI),
        but I haven't determined this for certain.  I asked to be
        put on the list but haven't heard anything back.

                                                   Harry Weeks
                                                   (Weeks@UCBpopuli)

------------------------------

Date: 17 Aug 84 12:22:34 EDT
From: KYLE.WBST@XEROX.ARPA
Subject: Brain Theory and Language

Re: David's request of 13 Aug 84 on Brain Theory-Language & EEG
(Whorfian Hypothesis, from Sde@Mitre-bedford.ARPA)

A team at Washington University in St. Louis, Mo. developed PET Scan
techniques (Positron Emission Tomography) that allowed for real time
monitoring of brain response to various external stimuli (sound, light,
etc.). The key people are now at UCLA, I think.

You can read about this work in Science News, or in Science magazines.
There was also a booth at the Toronto APA meeting in 1982 describing
various applications for PET technology, and I seem to recall a paper
describing some of the work you mentioned with musicians.

One of the magazine articles, I think, mentioned differences in left- and
right-brain activity between musicians who had become orchestra conductors
and those who had not. Musicians also respond differently to non-musical
sound inputs such as alarms, ambulance sirens, etc., and it seems to have
something to do with their training.

As you probably know, the entire issue is clouded by left-handed people.
They seem to fall into three categories: mirror images of right-handed ones
(i.e., speech is in the lefty's right brain hemisphere, etc.), same as
right-handed people, and non-hemisphere-specific (i.e., both halves share the
same functions with no dominance). The last group seems always to have
inner conflict and trouble making decisions.

PET techniques require access to the right expensive gear to make the
short-lived radioisotopes that are presented to the subject's brain via
foodstuffs that resemble sugars the brain can use. The emitted
radiation is picked up by gear similar to an X-ray CAT scanner and
presented to the researcher on a CRT. What one sees is a profile of
brain metabolism in response to various stimuli. It is superior to the
EEG in the sense that you can see dynamic resource allocation as a
function of problem solving. It would provide interesting study for
those interested in emulating nature's parallel processor. As I recall,
work has also been done using the technique to monitor activity during
math problem solving exercises.

One last note of reference, APA stands for American Psychiatric (or
Psychological...I forget) Association. The 1982 convention I attended
was at the Sheraton Plaza complex in Toronto, and a proceedings was
generated so a good library should be able to locate a copy.

Earle.

------------------------------

Date: 14 Aug 84 23:39:09-PDT (Tue)
From: decvax!decwrl!flairvax!kissell @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: flairvax.720

(Norm Andrews challenges Ray Chen's agnosticism on cause and effect)

> The concept of proof depends upon the concepts of cause and effect, among
> other things.

This is simply not true.  The notion of logical proof involves implication
relationships between discrete statements in discourse.  This is an agreed
upon rule of the game.  Causality assumes implication relationships between
discrete events in the world.  The universe may or may not argue like a
philosopher, and it is not always clear what constitutes a "discrete" event.

> So what's causality?  The law of identity applied to action.  Things do
> what they do, in any given context, BECAUSE they are what they are.

This is a denial of causality, not a definition.  If things do what they
do because they are what they are, then they certainly can't be *caused*
to do anything by something else.

Unless, of course, the only *thing* is everything.

uucp: {ihnp4 decvax}!decwrl!\
                             >flairvax!kissell
    {ucbvax sdcrdcf}!hplabs!/

------------------------------

Date: Mon 13 Aug 84 19:40:34-PDT
From: Richard Pattis <PATTIS@WASHINGTON.ARPA>
Subject: Name the Presidential Candidate who wrote...

Name the presidential candidate who wrote the following:

  It was in that mood, living day after day with this matter principally
  occurring in my mind, that I halted such peripheral considerations as the
  spur-of-the-moment Plotto simulation exemplified, and resolved to go
  directly to the crux of the matter of "artificial intelligence."

  It was simple enough for me to do.  A knowledge of both analog and
  computer principles, philosophical rigor, and my competence in
  economics: it was a simple matter to lay out in my mind a worldwide
  network of task-oriented, linked computers performing production of all
  human needs, including the building of task-oriented computers like
  themselves.

  Such an array is the precondition for supposing that "artificial intelli-
  gence" in computers might be approximated, at least in the form of conscious
  powers of deduction.  Since human consciousness and intelligence depend on
  what Kant terms the synthetic a priori processes, and since there is no
  configuration of the indicated sort of model which could accommodate such
  synthesis, there is no way in which any form of computer could become
  willful in a human sense of willful intelligence.

  It was obvious, on less rigorous grounds, that no computer could synthesize
  intelligent behavior in the manner Minsky and others were approaching this.
  That was the simple case to prove.  Minsky's problem was that he proceeded
  in ignorance of even a Feuerbachian model of the determination of
  intelligence.


There is more, but it becomes less focused, and I became tired of typing.  As
a hint, the principal accomplishment of the author of this quotation is, "...
that of being, by a large margin of advantage, the leading economist of the
twentieth century to date."

Rich

------------------------------

Date: Tue, 14 Aug 84 17:09 EDT
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Project Reports - UPenn


THE CENTER FOR ARTIFICIAL INTELLIGENCE in the Department of Computer
and Information Science at the University of Pennsylvania has received
a major award from the U.S. Army Research Office for research and
education in Artificial Intelligence.

The award is for $7.2 million together with a supplementary DOD-URIP
award of $500,000, a total of $7.7 million over a period of five years.
The award will support faculty and technical staff and provide graduate
research fellowships and research equipment. The contract is from the
Electronics Division of the Army Research Office under the direction
of Dr. Jimmie Suttle.

Principal Investigator for the grant is Professor Aravind K. Joshi and
co-Principal Investigators are Professors Norman Badler, Ruzena Bajcsy,
Peter Buneman, and Bonnie Webber.

Pennsylvania's CENTER FOR ARTIFICIAL INTELLIGENCE is located in the
Department of Computer and Information Science but includes members from the
departments of Mechanical Engineering, Electrical Engineering, Linguistics,
Philosophy and Psychology as well as the Wharton School and the School of
Medicine.  Primary research interests include natural language processing,
flexible communication with knowledge bases, programming languages and
knowledge bases, automated reasoning and expert systems, computer
interaction in three dimensions, interaction of visual and tactile
information, robotics, analysis and synthesis of motion, computer graphics
and animation, computational logic, and the design of languages for
representing and manipulating knowledge.

The CENTER FOR ARTIFICIAL INTELLIGENCE has been the recipient of
several other major grants recently, including an NSF Coordinated
Experimental Research grant ($3.8 million for five years), an IBM grant
for new ventures in Computer Science ($1 million), a Sloan Foundation
grant for Cognitive Science ($1.0 million), an Air Force Office of
Research grant for a Query-Driven Vision System ($1 million), a grant from
NASA for Human Body Motion Modelling ($800,000), and several grants
from the NSF Intelligent Systems Division.

Students interested in applying for graduate admission should write to:

     Professor Peter Buneman, Graduate Group Chair
     Department of Computer and Information Science,
     The Moore School
     University of Pennsylvania
     Philadelphia, PA 19104

Inquiries concerning faculty positions (regular and visiting) and research
staff positions should be directed to Professor Aravind K. Joshi at the same
address as above.

------------------------------

End of AIList Digest
********************

∂21-Aug-84  1735	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #108    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Aug 84  17:34:49 PDT
Date: Tue 21 Aug 1984 15:41-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #108
To: AIList@SRI-AI


AIList Digest           Wednesday, 22 Aug 1984    Volume 2 : Issue 108

Today's Topics:
  LISP - Common Lisp & Lisp/VM
----------------------------------------------------------------------

Date: Mon, 20 Aug 1984  15:42 EDT
From: Skef Wholey <Wholey@CMU-CS-C.ARPA>
Subject: Common Lisp

    From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
    Subject: Common Lisp.

        ... DEC developed Common Lisp at CMU using its money and
            apparently many of its own personnel, so they own
            the code developed.

DEC's Common Lisp was based on Spice Lisp -- a portable Common Lisp
implementation (written almost entirely in Common Lisp) for personal
workstations.  The Spice Lisp code is in the public domain, but the
VAX-specific compiler and runtime code is owned by DEC.  They have certainly
made big changes even in the Lisp-level code so that our sources may be very
different today, but most of the work done at CMU was on a portable,
public-domain implementation.

Perq Systems is (or will be) selling Common Lisp for the Perq -- their
implementation IS Spice Lisp, with little or no change.  Right now it is
jointly maintained by Perq and CMU.  Data General has also announced a Common
Lisp (based on Spice Lisp as well).  Symbolics is currently working on a Common
Lisp Compatibility Package (CLCP) that is NOT based on Spice Lisp.  Because of
the strong similarity of Common Lisp and Zetalisp, such a compatibility package
is feasible.  I've "ported" two large (source code at least 100K characters)
Common Lisp programs from the Perq (in Spice Lisp) to the 3600 (with CLCP) with
almost no modification.

        ... There may be a Common Lisp mailing list (Common-Lisp@SU-AI),
            but I haven't determined this for certain.  I asked to be
            put on the list but haven't heard anything back.

That mailing list was used during the design of Common Lisp and has been pretty
quiet lately except for nit-picking issues implementors worry about.  There is
currently no "Common-Lisp-Users" mailing list, but one could be created if
Common Lisp was deemed inappropriate material for AIList.

--Skef

------------------------------

Date: 2 Aug 1984 17:06:53-EDT (Thursday)
From: Mark N. Wegman <WEGMAN.YKTVMZ%ibm-sj.csnet@csnet-relay.arpa>
Subject: Lisp/VM - History and Overview

   [This is a response to my request for more information.
   I have broken the message into two parts. -- KIL]


            A SHORT DESCRIPTION OF LISP/VM
            ------------------------------

           A    complete   LISP    system,    LISP/VM   comprises    an
           interpreter/compiler for  its LISP language and  an environ-
           ment  that includes  a syntax-oriented  editor and  run-time
           debugger.

           History

           Created  at  the  IBM  Thomas  J.  Watson  Research  Center,
           Yorktown Heights, New York, LISP/VM grew out of a decade and
           a half of experience producing LISP systems.  Early LISP de-
           velopments at  T. J. Watson  were derived from the  LISP 1.5
           developed by John  McCarthy of MIT and  distributed by SHARE
           for the  IBM 704.  Subsequent modifications  and conversions
           by Fred Blair ran on  IBM 7090, 7094, 7040 and 7044 comput-
           ers.  This created and supported a LISP user community at T.
           J. Watson that has persisted to this day.

           After  the introduction  of System/360  in the  1960's, Fred
           Blair,  assisted  by  James Griesmer,  Mark  Pivovonsky  and
           Joseph Harry,  produced LISP/360, which inherited  much from
           the LISP 1.5 tradition.  LISP/360  ran in both the batch en-
           vironment  of OS/360  and  the  time-sharing environment  of
           TSS/360 and  VM/CMS.  The advent  of the IBM  System/370 and
           the limitations of the 18-bit  address space led to the cre-
           ation in the  mid-1970s of LISP/370.  In  the natural evolu-
           tion of a system created in a research environment, LISP/370
           diverged in its  semantics from both LISP/360  and LISP 1.5.
           It was eventually frozen as  an IUP (installed user program)
           and made available to internal IBM sites and some customers.

           In 1978, a new LISP project that would substantially enhance
           the capabilities  of LISP/370  was started.   Cyril Alberga,
           Martin Mikelsons, and  Mark Wegman were the  authors of this
           new LISP system, called YKTLISP,  for Yorktown LISP.  In ad-
           dition to being enhanced  functionally, YKTLISP was provided
           with a  sophisticated programming environment so  that users
           would be encouraged to  write maintainable and readable LISP
           programs.  It  has been used  extensively within IBM  and is
           now released publicly as  LISP/VM.  Contributors to the ref-
           erence manual  included the  system authors, John  Sowa, and
           Mary Van Deusen.

           Many  people have  participated in  discussions and  reviews
           which have contributed to the  quality of LISP/VM.  They in-
           clude:   Marc  Auslander,  Len  Berman,  Fred  Blair,  Chris
           Bosman-Clark, Alan  Brown, Larry Carter, Ashok  Chandra, Ken
           Chatfield, Alan Cobham, Walt  Daniels, James Davenport, Doug
           DeGroot, Cay  Dietrich, Pat Goldberg, Jim  Griesmer, Se June
           Hong,  Dick Jenks,  Paul Kosinski,  Vincent Kruskal,  George
           Leeman,  Victor  Miller,  Jim   Moore,  George  Radin,  J.A.
           Robinson,  Dick Ryniker,  Marshall Schor,  John Sowa,  Barry
           Trager, Jean Voldman, and Karen Woolhouse.

           LISP/VM Overview

           The LISP/VM Reference Manual describes in detail the facili-
           ties  and operators  available in  LISP/VM.  The  purpose of
           this description  is to list  the major features  of LISP/VM
           and some of the implementation  details to allow the experi-
           enced LISP programmer  to compare LISP/VM to  other LISP im-
           plementations.

           LISP/VM is  an interactive  LISP system for  use on  the IBM
           System/370 computer.   A program development  environment is
           provided which supports:

           -   A structure editor for LISP functions and data which al-
               lows the  creation and modification of  objects from the
               file system and from the dynamic store.

           -   An interactive interpreter which uses the editor to dis-
               play the course of program execution.

           -   An indexed file system allowing access to LISP functions
               and data individually and as collections.

           -   A compiler which will produce either immediately execut-
               able  functional  objects,  or  relocatable  objects  in
               files.

           -   Carefully designed  compiler and  interpreter semantics.
               In most  practical cases, interpreted and  compiled code
               are fully interchangeable.

           -   An error handler which  will, under user control, either
               return to the command level or enter a primitive command
               level from which the state of the computation may be ex-
               amined.

           -   Pattern matching during lambda-binding and assignment.
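
               The pattern matching mentioned in the last item can be
               sketched by analogy with Python's sequence destructuring
               (a hypothetical illustration of the idea, not LISP/VM's
               syntax or mechanism; the function and data layout are
               invented for the example):

```python
# Sketch: binding named components out of one structured argument,
# in the spirit of LISP/VM's bound-variable patterns.  Hypothetical
# example; LISP/VM expresses this with lambda-list patterns instead.
def midpoint(segment):
    # The "pattern" ((x1, y1), (x2, y2)) names the components of the
    # single argument and binds them to distinct variables.
    (x1, y1), (x2, y2) = segment
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

               The point is the same in both systems: the shape of the
               argument is declared once, and its parts arrive already
               bound to variables.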

------------------------------

Date: 2 Aug 1984 17:06:53-EDT (Thursday)
From: Mark N. Wegman <WEGMAN.YKTVMZ%ibm-sj.csnet@csnet-relay.arpa>
Subject: Lisp/VM - Features


           LISP/VM Features

           -   Data types


               -   Numbers include small integers  (up to 26 bits), in-
                   tegers of arbitrary precision (bignums) and floating
                   point numbers (7-bit exponent and 56-bit fraction).

               -   Identifiers include characters and gensyms.

               -   Pairs (lists or conses).

               -   Vectors  of  arbitrary objects,  integers,  floating
                   point numbers, characters, or bits (Boolean values).

               -   Hashtables can hash on pointers or structure.

               -   Readtables allow prefix and infix character and dis-
                   patch macros.

               -   State descriptors  capture the control  and environ-
                   ment components of an evaluation in a single object.

               -   Streams are the interface to data in external files.

               -   Functions and macros, both interpreted and compiled,
                   are ordinary data objects.

               -   Funargs combine a function or  macro and a state de-
                   scriptor to provide a form of closure.

           -   Scope and Extent

               -   Variable bindings normally have lexical scope.

               -   Variable bindings  that are declared fluid  have dy-
                   namic scope.

               -   Control points established by CATCH and related con-
                   structs have dynamic scope.

               -   Labels (or GO targets) have lexical scope.

           -   Evaluation rules

               -   Self-evaluating  forms  (constants) consist  of  the
                   numbers, character vectors, bit vectors and NIL.

               -   Identifiers  (symbols or  variables)  evaluate to  a
                   current  lambda-bound value  or to  a default  value
                   stored in a non-lambda environment.

               Special forms as well as macros and function invocations
               are recognized by evaluating the CAR of a compound form.
               This feature allows  all operators in LISP/VM  to be re-
               defined by the user in order to modify or extend the se-
               mantics of the language.

               -   The special forms are:

                     CLOSEDFN        FUNCTION        PROGN
                     COND            GO              QUOTE
                     EVALQ           GVALUE          RETURN
                     EXIT            LAMBDA          SEQ
                     F*CODE          MLAMBDA         SETQ

               -   Macros  are applied  to  the invoking  form and  the
                   value of the macro call is re-evaluated.  A macro is
                   defined by assignment or lambda-binding.

               -   Functions may have a fixed or variable number of ar-
                   guments.   Functions are  defined  by assignment  or
                   lambda-binding.

               -   The bound variable  part of functions and macros may
                   be a pattern that specifies  the structure of an ar-
                   gument.  Specified components of  an argument may be
                   bound to distinct variables.

           -   In addition  to all  the usual  type and  numeric predi-
               cates, LISP/VM includes three forms of structural equal-
               ity:

               -   EQUAL is the traditional  equality test.  Atomic ob-
                   jects are EQUAL if  their external forms are identi-
                   cal.   Two composite  objects are  EQUAL if  for any
                   combination of  access operations  the corresponding
                   components are EQUAL.

               -   UEQUAL    tests    for    structural    equivalence
                   (isomorphism).

               -   UGEQUAL tests for  structural equivalence and allows
                   different gensyms  in two objects if  their patterns
                   of occurrence are isomorphic.

               -   All three equality predicates terminate for all data
                   objects, including objects  with shared and circular
                   structure.

           -   Control structure

               -   Simple sequencing  constructs include  PROGN, PROG1,
                   PROG2 and COND.

               -   SEQ defines a sequential context and a scope for la-
                   bels and EXIT expressions.

               -   PROG defines a scope for labels  as well as a set of
                   variables, and a scope for RETURN expressions.

               -   Iterators include  the MACLISP-style  DO as  well as
                   most of the INTERLISP iteration constructs.

               -   Mapping  operators include  MAP, MAPCAR,  MAPCAN, as
                   well as operators that map over vectors.

               -   Non-local  exits  are  possible with  CATCH,  THROW,
                   THROW-PROTECT, and similar constructs.

               -   State  saving and  application  allow  co-routining,
                   backtracking and other non-LIFO control models.

           -   Identifiers (symbols) have a  pname and a property list.
               There is  no function-value  cell, since  function defi-
               nition is assignment.

               Characters are not a  special case in LISP/VM; they are
               simply the identifiers with a one-character pname.

               Gensyms are identifiers that  are created anew each time
               they are read.

           -   Operations on  numbers include all the  usual arithmetic
               functions and a set of in-line operators for small inte-
               ger arithmetic.

           -   Operations on lists normally  terminate or signal an er-
               ror when the list is circular.

           -   Hash tables allow  hashing on  pointers, structure,  or
               the contents of character strings.

           -   Operations on vectors  include specialized operations on
               character vectors (strings).

           -   Structured data  definitions allow  named access  to the
               components of a data object.

           -   The compiler can be  invoked dynamically to compile from
               and/or to the  LISP/VM heap or external  files.  In most
               cases, compiled  and interpreted definitions  are equiv-
               alent and interchangeable.

               Compilations take place in  an environment that may con-
               tain alternate definitions for  operators.  Thus, an op-
               erator can  have a  functional definition in  the normal
               environment, and  a macro definition that  emits in-line
               code in the compile environment.

           -   Streams allow  parsed or character-oriented  data trans-
               mission between LISP/VM and the external file system.

           -   Input/Output

               -   The PRINT function produces an external form for ev-
                   ery LISP/VM  data object.  In addition,  shared sub-
                   structure  is revealed  by markers  in the  external
                   form; these markers are recognized by the READ func-
                   tion.

               -   The input  syntax is determined by  a readtable that
                   defines a wide variety of character attributes.

           -   Storage allocation

               -   There  are no  fixed allocations  of storage  within
                   LISP/VM.  The  boundaries between heap,  stack, com-
                   piled  programs, etc.,  are adjusted  dynamically to
                   make full use of the available memory.

               -   Garbage collection  is done  by a  copying algorithm
                   that takes  time proportional to the  amount of data
                   that survives the collection.

           -   A MACLISP compatibility package allows existing applica-
               tions to be compiled for use in the LISP/VM environment.
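
           The claim earlier in this list that all three equality
           predicates terminate even on circular structure can be
           sketched in Python (an illustrative analogue of the idea,
           not LISP/VM's actual algorithm; the function name and the
           restriction to lists are assumptions of the sketch):

```python
def equal(a, b, _seen=None):
    """Structural equality that terminates on shared and circular
    structure, in the spirit of LISP/VM's EQUAL.  A pair of objects
    already under comparison is assumed equal, which cuts every cycle."""
    if _seen is None:
        _seen = set()
    if a is b:
        return True
    key = (id(a), id(b))
    if key in _seen:              # already comparing this pair: cut the cycle
        return True
    if isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            return False
        _seen.add(key)
        return all(equal(x, y, _seen) for x, y in zip(a, b))
    return a == b                 # atoms: fall back to ordinary equality
```

           Two circular lists built the same way compare equal under
           this sketch, where a naive recursive EQUAL would loop
           forever.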

------------------------------

End of AIList Digest
********************

∂25-Aug-84  1857	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #109    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Aug 84  18:57:45 PDT
Date: Wed 22 Aug 1984 09:39-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #109
To: AIList@SRI-AI


AIList Digest           Wednesday, 22 Aug 1984    Volume 2 : Issue 109

Today's Topics:
  Administrivia - Late Mailing,
  Man-Machine Interface - Ebstein/Turing Tests,
  Psychology - APA Acronym & Personality Tests,
  Applications - CLARIFY: On-Line Guide for Revising Technical Prose,
  Seminars - Mechanisms for Learning & Medical Knowledge Acquisition
    & Functional Languages and Parallel Computers,
  Conference - National ACM'84 Meeting
----------------------------------------------------------------------

Date: 25 Aug 84  1800 PDT
From: Kenneth I. Laws <Laws@SRI-AI>
Subject: Late Mailing

This issue, # 109, is being mailed after issue # 110 due to my accidentally
deleting the file instead of sending it out.  Fortunately the SRI systems
staff was able to recover a copy.  I apologize for not getting the seminar
notices out in reasonable time.  -- KIL

------------------------------

Date: 21 Aug 84  1136 PDT
From: Peter Blicher <PB@SU-AI.ARPA>
Subject: ebstein/turing tests

As I continue to deal with bureaucracies and uninterested workers, it is
becoming clearer that the Turing test will soon be passed, but not because
of any improvement in technology.

------------------------------

Date: 20 Aug 1984 00:01:00 EDT
From: Richard F. Hartung <HARTUNG@USC-ISI.ARPA>
Subject: What APA stands for.


   APA stands for both American Psychological Association and American
Psychiatric Association, two separate organizations.

    A friend who attends all these things tells me that the Psychological
org is meeting in Toronto this year (in about a week) and met there last in
1981 or 1980.  In 1982 it was in Washington D.C.  Although he doesn't
remember offhand where the Psychiatric org. met in '82, this was probably
the meeting referred to in the last AIList (conclusion by default).

                        Michael Moran
                        Lockheed Advanced Software Laboratory

------------------------------

Date: Sat 4 Aug 84 18:52:04-PDT
From: Stuart McLure Cracraft <G.MCLURE@SU-SCORE.ARPA>
Subject: Re: Any online Personality Tests?

     [Forwarded from the Stanford bboard by Laws@SRI-AI.  This is
    in response to a Stanford query about on-line personality tests.]


The book you want is
        PLEASE UNDERSTAND ME (absurdly chosen title)
        by David Keirsey and Marilyn Bates

It contains a 70-question instrument, the Keirsey Temperament Sorter,
based on the psychological typology developed by Carl Jung and
others (Myers-Briggs). The Myers-Briggs instrument, which is the
true instrument, is not available to the general public. Distribution
of it is limited to psychology grad students and psychologists/psychiatrists.
The Keirsey test is the best substitute I've found.

[...]

Several psychologists I have spoken with indicate that the Jung typology
tests, such as the Myers-Briggs and Keirsey, are gaining recognition
as extremely deep tests. Although I have no formal degree in
psychology, I feel that the Jung typology is far deeper than
the Rorschach, Minnesota Multiphasic, and California Psychological
inventories.

The Myers-Briggs was eschewed by earlier researchers and is only
recently overcoming its "bad" reputation. The correlations between
personality types and profession, marriage partners, etc. are
very significant.

To me, the Jung typology is the most profound psychological work
done in this century by anyone.

        Stuart

------------------------------

Date: Wed 15 Aug 84 12:01:49-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: CLARIFY: Rand's On-Line Guide for Revising Technical Prose

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


CLARIFY: Rand's On-Line Guide for Revising Technical Prose
Rand Report N-2037-RC, by M.E. Vaianan, N.Z. Shapiro, and M.L. LaCasse, 1983

This note describes the development and testing of CLARIFY, a computerized
writing aid designed to assist writers in revising technical prose. CLARIFY
is not a traditional readability formula; its design reflects research on how
English speakers understand sentences. CLARIFY flags sentences that have
certain patterns of nominalizations, prepositional phrases, and forms of the
verb to be.  The choice of these features reflects research which suggests
that the dominant strategy employed by English speakers in interpreting
sentences is to assume a subject-verb-object (SVO) structure.  The features
that CLARIFY flags are good surrogate indicators that a sentence does not have
an SVO structure, and therefore, that the initial interpretive strategy will
be unsuccessful.  In developing CLARIFY, the authors tested various patterns
of these features, and obtained user comments about the system's usefulness
and effectiveness.  Like all computerized writing aids, CLARIFY has
limitations, which are discussed in the note.  CLARIFY is in general use at
the Rand Corp., where it is also continuing to be tested.  55pp.**
**From Selected Rand Abstracts

Rand publications are located in the Green Library.
H. Llull
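
[The kind of surrogate-feature flagging the abstract describes can be
sketched in a few lines of Python.  The feature lists and threshold below
are hypothetical illustrations in the spirit of the abstract, not Rand's
actual rules. -- Ed.]

```python
import re

# Hypothetical surrogate features for "sentence probably lacks a plain
# subject-verb-object structure": forms of "to be", nominalizations
# (-tion/-ment/-ance/-ence words), and common prepositions.
BE_FORMS = re.compile(r"\b(is|are|was|were|be|been|being)\b", re.I)
NOMINAL  = re.compile(r"\b\w+(tion|ment|ance|ence)s?\b", re.I)
PREP     = re.compile(r"\b(of|in|by|with|for|on|to)\b", re.I)

def flag(sentence, max_hits=3):
    """Return True if the sentence shows enough surrogate features to
    suggest the reader's default SVO interpretation will fail."""
    hits = (len(BE_FORMS.findall(sentence))
            + len(NOMINAL.findall(sentence))
            + len(PREP.findall(sentence)))
    return hits > max_hits
```

A nominalization-heavy passive ("The implementation of the assessment of
the report was performed by the group.") trips such a filter, while a
plain SVO sentence ("The dog bit the man.") passes untouched.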

------------------------------

Date: 08/14/84 12:35:48
From: ROSIE
Subject: Seminar - Mechanisms for Learning

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


            DATE:      Thursday, August 16, 1984
            PLACE:     NE43-7th Floor Playroom

               MECHANISMS FOR LEARNING

                  Kamesh Ramakrishna
        Ohio State University, Columbus, Ohio


A number of different mechanisms for machine learning have been
proposed, though the definition of "learning" itself has not been
particularly clear.  We show that many proposed learning mechanisms
can be placed into two classes and that mechanisms within each class
are reducible to each other.  These two classes correspond roughly to
the "knowledge acquisition" and "skill refinement" classes proposed by
Mitchell, Carbonell, and Michalski; however, (and more interestingly)
they correspond to the two different levels of knowledge-based
processor architecture proposed by Newell in "The Knowledge Level".
The knowledge acquisition type learners appear to be at the
Symbol/Program level.  This observation lets us integrate this
approach to machine learning with the taxonomy of problem-solving
types proposed by Chandrasekaran et al., leading to the hope of an
integrated knowledge-level approach to both problem-solving and
learning.

With appropriate restrictions placed on the functioning of the
learning mechanisms, we show that the two classes also differ in the
fundamental learning problem that they solve.  We identify some
learning problems that are not solved by either class -- identifying
some possible future directions for research.

HOST:  Prof. Ramesh Patil

------------------------------

Date: Tue 21 Aug 84 09:17:27-PDT
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: Seminar - Medical Knowledge Acquisition

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

                          SIGLUNCH

SPEAKER:     Larry Fagan, Joan Differding, and Mark Musen
             Medical Computer Science Group

TOPIC:       OPAL: Practical Knowledge Acquisition for ONCOCIN

DATE:        Friday, August 24, 1984
LOCATION:    Chemistry Gazebo, between Physical & Organic Chemistry
TIME:        12:05

  We will  discuss our  design of  the ONCOCIN  knowledge  acquisition
framework named OPAL.  ONCOCIN is  designed to assist physicians  with
the management of cancer treatment plans.  A number of these treatment
plans, called protocols,  have been  entered into  the ONCOCIN  system
using low level  tools.  We  have recently built  a protocol  oriented
knowledge  acquisition  system  designed   directly  for  the   cancer
specialist (oncologist).  The OPAL knowledge acquisition subsystem  is
graphically based and represents our analysis of the common components
of cancer treatment plans.

------------------------------

Date: Tue 21 Aug 84 11:11:30-EDT
From: Pamela Sedell <MAP@MIT-XX.ARPA>
Subject: Seminar - Functional Languages and Parallel Computers

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


          "Functional Languages and Parallel Computers"


                       John Hughes
           Oxford University Computing Laboratory
             Programming Research Group, Oxford

                     Friday, August 24, 1984
                         NE43-512, 2:15

  We introduce functional programming and show how "real programs"
  such as simple operating systems can be written functionally.  We
  also explain why functional languages are particularly useful for
  programming parallel computers.  We have discovered that the
  relationship between functional languages and parallel computers
  is closer than previously suspected:  functional languages actually
  require a parallel implementation if they are to use memory efficiently.
  We introduce two simple constructs which allow the functional programmer
  to exert close control over memory requirements, and give a number of
  examples to illustrate them in use.  The new constructs can also be
  used to implement "non-deterministic unmerge".

------------------------------

Date: 13 Aug 84 10:08:19-PDT (Mon)
From: hplabs!hao!ames!eugene @ Ucb-Vax.arpa
Subject: National ACM'84 meeting San Francisco, CA
Article-I.D.: ames.472

I am posting the following at the request of Lew Bornmann of the ACM'84
publicity committee.  Please mail any requests for information to:

      bornmann@ames-nas-gw    [or bornmann%ames-nas-gw@su-score]

=======================================================================

                ACM-84:  The Fifth Generation Challenge

What:   ACM-84, the Association for Computing Machinery's 1984 Annual
        Conference.
When:   October 8 to 10, 1984, with an "Early Bird" reception on Sunday,
        October 7.
Where:  At the San Francisco Hilton and Tower, Mason and O'Farrell Streets,
        San Francisco.
Theme:  The Fifth Generation Challenge

The Conference will examine:

        The Impact of the Fifth Generation.
                Specifically, the effect that Fifth Generation computers
                will have over the next decade on society, industry, the
                professions, and computer science.
        The Building Blocks of the Fifth Generation.
                An examination of current developments, new techniques, and
                new products which will take computing into the 1990s.
        The Character of Integration...
                in the Fifth Generation.  How the Fifth Generation building
                blocks will fit together, and the impact of integration.

The technical conference program will be complemented by:
        o       Professional Development Seminars.
        o       An exhibit program.
        o       An educators' program.
        o       A computer chess championship.

Social events will include a "Themes of San Francisco" gala evening and an
awards luncheon.

Special travel arrangements have been made with Corporate Travel Services of
Sunnyvale, Ca.  These include discounted air fares and pre- and
post-conference tours.  (CTS toll-free phone number:  800/851-3478; in
California:  call 408/734-9990 collect.)

Advance Registration Fees:
        $110.00 ACM Members
        $150.00 Non-ACM Members

Accommodations:  Blocks of rooms for ACM-84 have been reserved.  Please
contact the Hilton directly for reservations.  When calling, specify ACM-84
for reduced rates.
        Director of Front Office Operations
        San Francisco Hilton Tower
        Mason and O'Farrell Streets
        San Francisco, Ca.  94102
        (415)771-1400

Room rates:
        Singles begin at $67
        Doubles begin at $87

For any additional information, contact:
        (415)948-6306

------------------------------

End of AIList Digest
********************

∂24-Aug-84  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #110    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Aug 84  15:13:31 PDT
Date: Fri 24 Aug 1984 11:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #110
To: AIList@SRI-AI


AIList Digest            Friday, 24 Aug 1984      Volume 2 : Issue 110

Today's Topics:
  Inexact Reasoning - Panel Discussion at AAAI-84,
  LISP - VAX VMS LISP,
  AI Tools - IBM-PC/Family Tools & Expert System Planner & Taxonomy,
  Expert Systems - Question about HEARSAY-II,
  Games - Chess Strategy and Protocols,
  Seminar - WYSIWYG Programming
----------------------------------------------------------------------

Date: Thu, 23 Aug 84 08:29:19 PDT
From: Koenraad Lecot <koen@UCLA-LOCUS.ARPA>
Subject: Inexact Reasoning Panel Discussion at AAAI-84

Does anybody have a tape and/or summary of what has been said during
the AAAI-84 panel discussion on inexact reasoning ?

-- Koenraad Lecot

 Arpa : koen@ucla-locus
 uucp : ...ucla-cs!koen

------------------------------

Date: 18 Aug 84 10:49:50-PDT (Sat)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ncsu!uvacs!gmf @ Ucb-Vax.arpa
Subject: VAX VMS LISP ?
Article-I.D.: uvacs.1453

I would appreciate information about LISP interpreters for a VAX
11/780 running VMS.  Thanks in advance.

          Gordon Fisher
          ...uvacs!gmf

------------------------------

Date: 22 Aug 84 8:18:16-PDT (Wed)
From: ihnp4!mhuxl!ulysses!unc!mcnc!rti!rti-sel!crm @ Ucb-Vax.arpa
Subject: Re: VAX VMS LISP ?
Article-I.D.: rti-sel.1182

Check out Common LISP, a new product for VMS which (I am told) is very like
InterLISP...

You can buy the book about it from Digital Press.  Call your local DEC office
for details.

Charlie Martin

------------------------------

Date: Thu, 23 Aug 84 14:38 MST
From: LMiller%pco@CISL-SERVICE-MULTICS.ARPA
Subject: AI tools for IBM-PC/family

I  am  looking  for leads for two kinds of tools to run on models of the
IBM-PC (or compatibles).  The tools are: 1) expert system building tools
(have seen Expert-Ease and M.1 and need more than just a good LISP/PROLOG)
and  2)  CAE  graphic design aids for software engineers (a la CAD tools
for  hardware).   I  know  such  tools exist for somewhat large machines
(e.g.   Vaxen,  Apollo, Sun ...) but our needs are for something on the
order of a super/micro.  If you've seen such tools and can provide phone
numbers or addresses of vendors I would appreciate your help.  Thanks

Lance Miller
(LMiller%pco@CISL)

------------------------------

Date: 22 Aug 84 15:05:41-PDT (Wed)
From: ihnp4!drutx!druxx!jlmalito @ Ucb-Vax.arpa
Subject: expert system ``planner'' wanted
Article-I.D.: druxx.604


We are currently planning on building a UNIX-based expert system to
handle ``system administration'' on a computer system.  My plan is to
build a knowledge base containing facts about the computer system, as
well as descriptions of possible actions and their consequences.  The
expert system would be presented with a description of the current world,
and a goal.  The planner (the ``inference engine'') would determine what
actions are needed to get from the current world to a goal state.
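The goal-directed search described above can be sketched as a simple
breadth-first state-space search over STRIPS-style actions.  This is only an
illustrative sketch; the facts and actions below (disk mounting, backups) are
hypothetical stand-ins, not part of the original request:

```python
from collections import deque

def plan(state, goal, actions):
    """Breadth-first search from the current world to a goal state.
    `state` and `goal` are sets of facts; each action is a tuple
    (name, preconditions, additions, deletions) over such facts."""
    start = frozenset(state)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        world, steps = frontier.popleft()
        if goal <= world:            # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= world:         # action is applicable
                nxt = (world - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                      # no plan reaches the goal

# Hypothetical system-administration facts and actions:
actions = [
    ("mount_disk", frozenset({"disk_attached"}),
     frozenset({"disk_mounted"}), frozenset()),
    ("start_backup", frozenset({"disk_mounted"}),
     frozenset({"backup_running"}), frozenset()),
]
print(plan({"disk_attached"}, {"backup_running"}, actions))
# ['mount_disk', 'start_backup']
```

Breadth-first search returns a shortest plan; a production planner would add
goal-directed heuristics, but the knowledge-base shape is the same.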

Due to time constraints, we are trying to find a planner that can
accomplish the task described above.  If anyone knows of such a planner
(or a system containing one), please contact me.  We need source,
preferably for a UNIX system.  Anything close will do.  We would
consider any purchase agreement and gladly accept freebies.

Quick responses would be appreciated.  (Also, any ``Have you
checked...?'' would be great.)

Any responses of general interest will be posted.

thanks,

Jeanine L. Malito
{ihnp4,allegra}!druxx!jlmalito

AT&T Information Systems
Rm. 30G73
11900 N. Pecos St.
Denver CO  80234

303-538-3859

------------------------------

Date: 24 Aug 84 0:40:07 EDT
From: KYLE.WBST@XEROX.ARPA
Subject: Taxonomic System

RE: BILL MANN'S REQUEST FOR TAXONOMY INFO.

IN THE LATE 1960'S, CESARE CACERES DEVELOPED PROGRAMS TO ANALYZE EKG
HEART WAVEFORMS WITH THE U.S. PUBLIC HEALTH SERVICE IN ORDER TO CLASSIFY
PATTERNS THAT DEVIATED FROM THE NORM TO SPOT EARLY HEALTH PROBLEMS.

HE ALSO DEVELOPED STREP THROAT BACTERIA CLASSIFICATION PROGRAMS IN
CONJUNCTION WITH THE HOFFMAN LA ROCHE COMPANY.

ABOUT THIS SAME TIME, SANDIA LABS DEVELOPED A BACTERIA CLASSIFICATION
PROGRAM.

ALSO AT ABOUT THIS SAME TIME THE ARMY AT EDGEWOOD ARSENAL DEVELOPED A
CLASSIFICATION PROGRAM FOR VARIOUS CHEMICALS THAT HAD TOXIC PROPERTIES.

MOST OF THE INFO ON THESE DEVELOPMENTS IS IN THE OPEN LITERATURE IN THE
OLD ASTIA DOCUMENT CLASSIFICATION SYSTEM.

------------------------------

Date: 22 Aug 1984 22:05:18-PDT
From: doshi%umn-cs.csnet@csnet-relay.arpa
Subject: Question about HEARSAY-II.

I have a question about the HEARSAY-II system [Erman et al. 1980].

What exactly is the HEARSAY system required/supposed to do ?
i.e. what is the meaning of the phrase :
      "Speech Understanding system"


Honestly, I did go thru [Erman+ 1980] carefully. I can quote the following :

      page 213 : "The HEARSAY-II system....represents both a specific
                  solution to the speech-understanding problem and a
                  general framework for co-ordinating independent
                  processes to achieve cooperative problem solving
                  behaviour."

      page 213 : "The HEARSAY-II reconstructs an intention ....      "

      page 214 : "The HEARSAY-II recognises connected speech in .... "

      page 234 : (this is a footnote)
                 "IBM has been funding work with somewhat different
                  objective... Its stated goals mandate little reliance
                  on the strong syntactic/semantic/task constraints
                  exploited by the DARPA projects. This orientation is
                  usually dubbed SPEECH RECOGNITION as distinguished
                  from SPEECH UNDERSTANDING."

       page 233 : "DARPA speech understanding system performance goals.....
                                -------------                    -----
                  The system should
                      - Accept connected speech
                      - from many
                      - cooperative speakers of the General American Dialect
                      - in a quiet room
                      - using a good-quality microphone
                      - with slight tuning per speaker
                      - requiring only natural adaption by the user
                      - permitting a slightly selected vocabulary of 1000 words
                      - with a slightly artificial syntax and highly
                        constrained task
                      - providing graceful interaction
                      - tolerating less than 10 % semantic error

                        [this is the only direct reference to `understanding`
                         or `semantics`]

                      - ....... "

Let me explain my confusion with examples. Does the system do one of the
following :

      - 1) Accepts speech as input; Then, tries to output what (ever) was
          spoken or might have been spoken ?

      - 2) Or, accept speech as input and UNDERSTAND it ?

Now, the 1) above is, I think, speech RECOGNITION. DARPA did not want just that.

Then, what is(are)  the meaning(s) of UNDERSTAND ?

      - if I say "Alligators can fly", should the system repeat this and also
        tell me that that is "not true"; is this called UNDERSTANDING ??

      - if I say "I go house", should the system repeat this and also add that
        there is a "grammatical error"; is this called UNDERSTANDING ??

      - Or, if HAYES-ROTH claims  "I am ERMAN", the system should say
        "No, You are not ERMAN" - I don't think that HEARSAY was supposed
        to do this (it does not have Vision etc). But you will agree that
        that is also UNDERSTANDING. Note that the above claim by
        HAYES-ROTH would be true if :
              - he had changed his last name
              - he was merely QUOTING what ERMAN might have said somewhere
              - etc

So, could someone (the original authors of HEARSAY-II, perhaps)
respond to the question :
        In light of the above examples, what does it mean to say
        that HEARSAY-II understands speech ?

Thank you.

-- raj
   Graduate student
   U. of Minnesota

   CSNET : doshi.umn-cs@csnet-relay


Reference : L. D. Erman, F. Hayes-Roth, V. R. Lesser, and D. R. Reddy,
            "The Hearsay-II Speech-Understanding System: Integrating
             Knowledge to Resolve Uncertainty,"
            ACM Computing Surveys, Vol. 12, No. 2, June 1980.

------------------------------

Date: Thu Aug 23 14:47:43 1984
From: mclure@sri-prism
Subject: number-cruncher vs. humans: 9th move

[...]

The Machine Moves
-----------------
        Depth   Move    Time for search         Nodes      Machine's Estimate
        8 ply   cxd4   18 hours, 7 minutes    6.5x10↑7       +=


The Game So Far
---------------
1. e4    c5     6. Re1   a6
2. Nf3   d6     7. Bf1   e5
3. Bb5+  Nc6    8. d4    cxd4
4. O-O   Bd7    9. cxd4
5. c3    Nf6

Commentary
----------
  [...]

  Tli@Usc-Eclb, USCF ?
   Unfortunately, the voting will also keep out the inspired moves.  So
   we get an average game of all playing....

  SLOAN@WASHINGTON
   8. ...  b5
   It is worth noting a classical problem here in building a chess program:
   1) The machine was following its book until this move,
   2) As White, the machine should enjoy AT LEAST EQUALITY in the first
     position following "book" recommendations,
   3) However, having switched from "book" evaluation to its own
     opening/middle game evaluation, the machine now decides that it
     doesn't much like this position after all!
   There are several possibilities:
   0) Black is superior in the starting position (unlikely!)
   1) the book (at least this line) is inferior, and the machine should
      discard it (anyone out there think that the Prestige will do
      this?)
   2) the book is (objectively) correct, but this line does not match
     the playing "style" of the machine (i.e., the position is OK, but
     the machine doesn't know the correct thematic continuations, and
     hence will indeed find the position to be difficult.)
   This last possibility is most likely, and is not limited to machine
   play. Many human players have the same problem when they memorize
   columns and columns of analysis without understanding the REASONS for
   the evaluations at the ends of the columns.  This leads to post-mortem
   conversations of the form "That master isn't so strong; I had him
   CRUSHED in the opening...but he SOMEHOW escaped to a dead drawn
   ending - he didn't even know that it was theoretically drawn - he
   refused my draw offer! - I was so mad at him for that that I lost my
   concentration for 1 move and hung a piece."

  EWG@Cmu-Cs-Ps1, USCF ?
   The comment that the group of humans won't have a
   long-term strategy is, I think, naive.  It is just
   as easy for us to analyze lines of play (e.g.
   kingside vs queenside attack, try to trade off and
   queen a pawn, etc.) as it is for us to analyze the
   single position.  If anything it's somewhat easier,
   since we think about that anyway.  Why not solicit
   votes on that level as well and at least report the
   judgement (if not allowing it to directly choose the
   move at hand, which would be rash).  A suggestion
   for later in the game, at least.  This harkens back
   to memories of 10 or so years ago when I was still
   reading the chess books, and ran across a comment by
   one of the grandmasters (Sam Reshevski, I think?)
   who liked to play blitz and always used the style of
   spending a significant time thinking about lines of
   play at the start of the middle game.
   His strategy was to have the lines firmly in
   mind for later play.  The comment was that his
   opponents often got bored waiting for him to reply
   at that time and wasted the real time; he could then
   play at blitz pace much better as the game
   progressed and the opponent struggled for the right
   line(s) of play.  It also had the surface appearance
   of him putting himself deliberately
   in time trouble, which wasn't the case.

Replies to Arpanet: mclure@sri-unix or Usenet: sri-unix!mclure.

------------------------------

Date: Wed, 22 Aug 84 16:15:48 PDT
From: Guy M. Lohman <lohman%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - WYSIWYG Programming

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Wed., Aug. 29 Computer Science Seminar
  10:00 A.M.  WYSIWYG PROGRAMMING
  2C-012      Though single-user workstation hardware has evolved
            rapidly to the point of rivaling the mainframes of a
            few years ago, software has generally failed to keep
            pace.  "What you see is what you get" (WYSIWYG)
            software for text processing and other applications
            has shown the feasibility of performing applications
            by direct manipulations of visual objects.  But
            programming languages are still based on a
            "typewriter" model of communication which has
            remained essentially unchanged since the 1950's.
            This model has now been antiquated by the advent of
            high resolution displays and accurate pointing
            devices.  WYSIWYG applications are often dramatically
            easier to use than their traditional command-based
            counterparts.  This talk will describe a project to
            design and prototype an interactive facility for
            building programs as WYSIWYG objects, by capturing
            direct manipulations of visual objects on a display
            screen.  The resulting programs are animations which
            act like virtual users, doing the same things that a
            real user can do.  Building these programs is totally
            continuous with normal hands-on manipulation of the
            objects, while writing programs in traditional
            programming languages is quite discordant with that
            process.

            D. Hatfield, IBM Cambridge Scientific Center
            Host:  D. Chamberlin

            [...]

------------------------------

End of AIList Digest
********************

∂28-Aug-84  2259	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #111    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Aug 84  22:59:24 PDT
Date: Tue 28 Aug 1984 21:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #111
To: AIList@SRI-AI


AIList Digest           Wednesday, 29 Aug 1984    Volume 2 : Issue 111

Today's Topics:
  Hardware - Touch Screen,
  Games - Chess Notation,
  Conferences - AAAI-84 Review,
  Hardware - Cellular Logic,
  AI Tools - Taxonomy Assistant,
  Speech Understanding - Hearsay II,
  Seminar - Speech Acts as Summaries of Plans
----------------------------------------------------------------------

Date: 20 August 1984 16:43-EDT
From: Roland Ouellette <ROLY @ MIT-MC>
Subject: Who knows about TOUCH SCREEN?

My group wants to buy a touch screen for our Symbolics 3600.  I would
appreciate any information about interfacing one to a 3600 or any
other machine.  Please, also, send me reviews, whose products are
great and whose aren't so hot, price, and anything else that you might
think of.  If you could send me information about who to get in touch
with, too, (i.e. address and/or phone) that would be fantastic.

Send mail to Roly at MIT-MC.ARPA

                        Many thanks in advance,
                        Roland Ouellette

------------------------------

Date: 26 August 1984 06:17-EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: number-cruncher vs. humans: 9th move

query: is there a program that can convert from the algebraic
notation to descriptive notation?  I learned P-K4 and like that,
and there is no possibility that I will ever have an intuitive
feel for cxd4 and the like.  Can it be converted for those of us
who are algebraic cripples?

------------------------------

Date: 21 Aug 84 13:38:10-PDT (Tue)
From: ihnp4!mhuxl!mhuxm!sftig!sfmag!eagle!prem @ Ucb-Vax.arpa
Subject: AAAI-84 - a short subjective report
Article-I.D.: eagle.1187

The feelings evoked by the tremendous increase in interest, funding, media
participation and products available are best described by this excerpt from
W. B. Yeats :

"And what rough beast, its hour come round at last,
 Slouches toward Bethlehem to be born? "

                        - W. B. Yeats, from "The Second Coming"

allegra!eagle!prem

------------------------------

Date: 9 Aug 84 10:23:00-PDT (Thu)
From: hplabs!hp-pcd!hpfcnml!robert @ Ucb-Vax.arpa
Subject: Re: Hardware Implementations of Cellular
Article-I.D.: hpfcnml.3400002

The latest Discover magazine mentions a hardware implementation of
cellular automata in their article on the topic.  Interesting,
readily available, light reading.

                        -Robert (animal) Heckendorn
                        hplabs!hpfcla!robert

------------------------------

Date: Fri 24 Aug 84 22:26:52-EDT
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Taxonomy Assistant

     The problems of systematically representing the conceptual
relations among a set of abstract objects in any knowledge domain go
right to the heart of much leading-edge AI research. All inferencing
is based, among other things, on implicit taxonomic understanding.

     It seems to me that embedded in the knowledgebase management
systems which I hope we will see developed in the near future will be
rich resources for evoking and representing taxonomies.  Semantic nets
provide an ideal scheme with which to do just that.

     The most useful thinking about taxonomies and classification
theory appears not in the computer science literature, but at the
interface of library science, information science, and philosophy. The
leading journal in the field is ←International Classification← (which
should be available in any world class humanities library). It is
published (as I recall) three times a year, and is chock-full of
pointers to articles, books, dissertations, etc. in the world
literature on all aspects of classification theory.

     You might want to scan the following subject headings in some
recent editions of the index ←Library Literature← (published by H. W.
Wilson): Classification analysis, Subject headings, Thesauri. File 57
(Philosopher's Index) and File 61 (LISA -- Library and Information
Science Abstracts) on Dialog are also fertile sources of information
on the literature about taxonomies and classification theory. There
are many insights in the theoretical writings on classification theory
in the library science literature which could be handily transferred
to advanced AI research and systems.

     It occurs to me that what we need is a ←meta-taxonomy←, that is,
a thorough inventory of all the fundamental conceptual structures by
which objects in ←any← domain can be taxonomically related.

     One way a taxonomy assistant might operate is to combine each and
every significant term in a knowledge domain with every other term,
and offer a list of possible relations with which to tag each offered
matching set. Someday, perhaps, we will be able to buy off the shelf
"taxonomy packs" (dynamic thesauri) in many domains.
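The pairing step described above is easy to mechanize.  A minimal sketch
(the relation labels here are hypothetical placeholders for whatever a real
meta-taxonomy would supply):

```python
from itertools import combinations

# Hypothetical starter set of relations a taxonomy assistant might offer.
RELATIONS = ("is-a", "part-of", "synonym-of", "unrelated")

def candidate_pairs(terms):
    """Offer every unordered pair of domain terms for relation tagging."""
    for a, b in combinations(sorted(terms), 2):
        yield (a, b, RELATIONS)

for a, b, rels in candidate_pairs({"oak", "tree", "leaf"}):
    print(a, "?", b, " choose from:", rels)
```

For n terms this offers n(n-1)/2 pairs, so a real assistant would need to
prune candidates rather than enumerate them exhaustively.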

-- Wayne --

------------------------------

Date: Sat, 25 Aug 84 01:36 EDT
From: Sidney Markowitz <sidney%MIT-OZ@MIT-MC.ARPA>
Subject: Hearsay II question in AIList Digest   V2 #110


    Date: 22 Aug 1984 22:05:18-PDT
    From: doshi%umn-cs.csnet@csnet-relay.arpa
    Subject: Question about HEARSAY-II.

    I have a question about the HEARSAY-II system [Erman et al. 1980].

    What exactly is the HEARSAY system required/supposed to do ?
    i.e. what is the meaning of the phrase :
          "Speech Understanding system"

I am not familiar with the HEARSAY-II system; however, I am answering
your question based on the following lines from the quotes you
provided, and some comments of yours that indicate that you are not
familiar with certain points of view common among natural language
researchers. The quotes:

(1)       page 213 : "The HEARSAY-II reconstructs an intention ....      "
(2)                   on the strong syntactic/semantic/task constraints
(3)                       - with a slightly artificial syntax and highly
                            constrained task
(4)                       - tolerating less than 10 % semantic error

  Researchers pretty much agree that in order to understand natural
language, we need an understanding of the meaning and context of the
communication. It is not enough to simply look up words in a
dictionary, and/or apply rules of grammar to sentences. A classic
example is the pair of sentences:   "Time flies like an arrow." and
"Fruit flies like a banana."  The problem with speech is even worse --
it turns out that even to separate the syllables in continuous speech
you need to have some understanding of what the speaker is talking
about! You can discover this for yourself by trying to hear the sounds
of the words when someone is speaking a foreign language. You can't
even repeat them correctly as nonsense syllables.
  What this implies is an approach to speech recognition that goes
beyond pattern recognition to include understanding of utterances.
This in turn implies that the system has some understanding of the
"world view" of the speaker, i.e., common sense knowledge and the
probable intentions of the speaker. AI researchers have attempted to
make the problem tractable by restricting the "domain" of a system. A
famous example is the "blocks world" used by Terry Winograd in his
doctoral thesis on a natural language understanding system, SHRDLU.
All SHRDLU knew about was its little world of various shapes and
colors of blocks, its robot arm and the possible actions and
interactions of those elements. Given those limitations, and the
additional assumption that anything said to it was either a question
about the state of its world or else a command, Winograd was able to
devise a system in which syntax, semantics and task performance all
interacted. For example, an ambiguity in syntax could be resolved if
only one grammatical interpretation made semantic sense.
  You can see how this approach is implied by the four quotes above.
With this as background, let's proceed to your questions...


    Let me explain my confusion with examples. Does the system do one of the
    following :
          - 1) Accepts speech as input; Then, tries to output what (ever) was
              spoken or might have been spoken ?
          - 2) Or, accept speech as input and UNDERSTAND it ?
    Now, the 1) above is, I think speech RECOGNITION. DARPA did not want just
    that.

    Then, what is(are)  the meaning(s) of UNDERSTAND ?
          - if I say "Alligators can fly", should the system repeat this and
            also tell me that that is "not true"; is this called UNDERSTANDING?
          - if I say "I go house", should the system repeat this and also add
            that there is a "grammatical error"; is this called UNDERSTANDING?
          - Or, if HAYES-ROTH claims  "I am ERMAN", the system should say
            "No, You are not ERMAN" - I don't think that HEARSAY was supposed
            to do this (it does not have Vision etc). But you will agree that
            that is also UNDERSTANDING. Note that the above claim by
            HAYES-ROTH would be true if :
                  - he had changed his last name
                  - he was merely QUOTING what ERMAN might have said somewhere
                  - etc

            In light of the above examples, what does it mean to say
            that HEARSAY-II understands speech ?


  The references to "tasks" in the quotes you provided are a clue that
the authors are thinking of "understanding" in terms of the ability to
perform a task that is requested by the speaker. The examples in your
questions are statements that would need to be reframed as tasks. It
is possible that the system could be set up so that a statement like
"Alligators can fly" is an implied command to add that fact to the
knowledge base, perhaps first checking for contradictions. But you
probably ought to think of an example of a restricted task domain
first, and then think about what "understanding" would mean in that
context. For example, given a blocks world domain the system might
respond to a statement such as "Place a blue cube on the red pyramid"
by saying "I can't place anything on top of a pyramid". There's much
that can be done with modelling the speaker's intentions and
assumptions which would affect the sophistication of the resulting
system, but that's the general idea.

-- Sidney Markowitz <sidney%mit-oz@mit-mc.ARPA>

------------------------------

Date: 28 Aug 1984 15:31-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Speech Acts as Summaries of Plans

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


                  Speech Acts as Summaries of Plans

                            Phil Cohen

                        SRI International
                               and
            Center for the Study of Language and Information
                         Stanford University


BBN AI Seminar
10:30 a.m. on Wednesday, September 5th
Third floor large conference room at 10 Moulton St., Cambridge.

Many theories of communication require a hearer to determine what
illocutionary act(s) (IA's) the speaker performed in making each
utterance.  This talk will sketch joint work, with Hector Levesque,
that aims to call this presumption into question, at least for some
kinds of illocutionary acts.  Such acts will be shown to be definable
on a "substrate" of interacting plans --- i.e., as beliefs about the
conversants' shared knowledge of the speaker's and hearer's goals and
the causal consequences of achieving those goals.  In this formalism,
illocutionary acts are no longer conceptually primitive, but rather
amount to theorems that can be proven about a state-of-affairs.  The
important point here is that the definition of, say, a request is
derived from an independently-motivated theory of action, rather than
stipulated.  Just as one need not determine if a proof corresponds to
a prior lemma, a hearer need not actually characterize the
consequences of each utterance in terms of the IA theorems, but may
simply infer and respond to the speaker's goals.  However, the hearer
could retrospectively summarize a complex of utterances as satisfying
an illocutionary act.

This move of defining illocutionary acts in terms of  plans may
alleviate a number of technical obstacles in applying speech act
theory to extended discourse.  It formally characterizes a range of
indirect requests  in terms of conversants' plans, and demonstrates
how certain conventionalized forms can be derived from and integrated
with plan-based reasoning.  Finally, it gives a formal foundation to the
view that speech act characterizations of discourse are not
necessarily those of the conversants but rather are the work of the
theorist.

------------------------------

End of AIList Digest
********************

∂31-Aug-84  1217	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #112    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 Aug 84  12:16:31 PDT
Date: Fri 31 Aug 1984 10:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #112
To: AIList@SRI-AI


AIList Digest            Friday, 31 Aug 1984      Volume 2 : Issue 112

Today's Topics:
  Books - AI Handbook,
  LISP - VMS LISPs,
  AI Tools - Metataxonomies,
  Speech Understanding - Word Recognition,
  Natural Language - No Crime Rate & DWIM
----------------------------------------------------------------------

Date: Wed 29 Aug 84 12:56:04-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Lib of CS intro offer: Handbook of AI Vols 1-3 for $5

In Sept issue of Scientific American.

Club members agree to buy 3 other books during the next year.  Given that
one can buy books priced under $20, it's a bargain any way you look at it.
Having to send in their monthly reply card is a nuisance, of course, but
quickly buying 3 books cuts that short.  And I have no doubt that anyone
serious about CS can find 3 interesting books in their 'Recent Selections'
catalogue.  Over the years I have bought dozens, which makes me 'a satisfied
customer', I guess.  Other than that, I have no connections ....

------------------------------

Date: 29 Aug 1984 12:50-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: VMS LISPs

> I would appreciate information about LISP interpreters for a VAX
> 11/780 running VMS.  Thanks in advance.

We use something called PSL (Portable Standard Lisp) from the Univ. of Utah.
It has both a compiler and an interpreter and, in our *opinion*, is a heck
of a lot faster and far more efficient than DEC COMMON LISP.  We have
a version we created from Utah's VAX UNIX version; I think Utah will
have a VMS version of their own very soon.

PSL has a COMMON LISP compatibility package, an object-oriented
programming facility, and loads of other handy stuff.  Unlike COMMON
LISP, PSL has a fixed-size heap with a two-space garbage
collector.  A properly tuned PSL can be very fast (better
than C in many cases), and five or six can be
run at one time (while still doing other things).  Three DEC COMMON
LISPs can bog down a VMS 780 system.
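A two-space ("copying") collector of the kind a fixed-heap Lisp uses is easy to sketch in miniature. The toy below is illustrative only: the cell layout, sizes, and names are invented for the sketch and are not PSL's actual implementation.

```python
# Toy two-space (semispace) copying collector, in the spirit of the
# fixed-size-heap collectors used by Lisps like PSL.  Everything here
# (cell layout, size, names) is invented for illustration.
HEAP_CELLS = 8      # cells per semispace, fixed at startup

class Heap:
    def __init__(self):
        self.space = [None] * HEAP_CELLS  # the current ("from") semispace
        self.free = 0                     # bump-allocation pointer
        self.roots = []                   # refs the program holds on to

    def cons(self, car, cdr):
        """Allocate a cell; a ref is ('ref', index), anything else an atom."""
        if self.free == HEAP_CELLS:
            self.collect()                # NB: refs not reachable from
        if self.free == HEAP_CELLS:       # self.roots go stale here
            raise MemoryError("heap full even after collection")
        self.space[self.free] = [car, cdr]
        self.free += 1
        return ('ref', self.free - 1)

    def collect(self):
        """Copy every cell reachable from the roots into the other
        semispace, then flip.  Garbage is simply never copied."""
        to_space = [None] * HEAP_CELLS
        forward = {}                      # old index -> new index
        n = 0

        def copy(v):
            nonlocal n
            if not (isinstance(v, tuple) and v[0] == 'ref'):
                return v                  # an atom passes through unchanged
            i = v[1]
            if i not in forward:          # first visit: reserve a new slot,
                forward[i] = n            # then copy the cell's contents
                n += 1
                car, cdr = self.space[i]
                to_space[forward[i]] = [copy(car), copy(cdr)]
            return ('ref', forward[i])

        self.roots = [copy(r) for r in self.roots]
        self.space, self.free = to_space, n
```

Note that any reference the program holds outside `roots` is invalidated by a collection, which is exactly why a real collector must be able to find and update every live pointer.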

                                                        -Todd K.
                                                        Unilogic

------------------------------

Date: Wed 29 Aug 84 09:31:47-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: Taxonomies

Some of the most recent KR systems attempt to provide meta-taxonomies;
I know of RLL/Eurisko, MRS, and AGE, all Stanford products.  Am not
sure what LOOPS provides in the way of knowledge about representation
schemes (although one could build something to recommend whether a given
piece of information should be a logical assertion, an object, an instance
variable of an object, Lisp code, etc).

Meta-taxonomies are HARD.  The ability to create a taxonomy of some body of
knowledge implies that one has both a deep and a broad understanding of that
body.  The creation of a meta-taxonomy implies a similar
level of understanding of many issues in knowledge representation, which
is definitely *not* the case.  We still lack adequate theories of
multiple inheritance, nor have we plumbed the depths of strange logical
systems.  Looking at library science is an interesting idea; while I
imagine that many of the classification schemes are informal (probably
relying on human judgement), librarians have been classifying massive
databases (books) for a long time.

Moving farther afield, taxonomies in other AI areas are lacking.  I asked
a while back about taxonomies for rule systems, and found that there was
about one paper, by Davis and King in a ca. 1976 Machine Intelligence
volume.  This, however, was an informal taxonomy, and not particularly
susceptible to mechanization.  Am still waiting for a tree that puts OPS5,
Emycin, and Prolog on different leaves...
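For what it's worth, the representational machinery for such a tree is the trivial part; an ISA-link taxonomy with transitive closure fits in a dozen lines. The facts below are invented placeholders, and the sketch deliberately begs the hard classification question of which links the tree should actually contain.

```python
from collections import defaultdict

class Taxonomy:
    """A semantic net stripped down to ISA links.  The representational
    machinery is the easy part; choosing the right nodes and links for
    a real domain is the hard part, and is not addressed here."""
    def __init__(self):
        self.parents = defaultdict(set)   # node -> its immediate categories

    def isa(self, child, parent):
        self.parents[child].add(parent)

    def ancestors(self, node):
        """Every category 'node' falls under (transitive closure of ISA)."""
        seen, stack = set(), [node]
        while stack:
            for p in self.parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

t = Taxonomy()                                   # invented example facts
t.isa('OPS5', 'production-system')
t.isa('Emycin', 'production-system')
t.isa('production-system', 'rule-system')
t.isa('Prolog', 'logic-language')
t.isa('logic-language', 'rule-system')
assert 'rule-system' in t.ancestors('OPS5')               # shared root
assert 'production-system' not in t.ancestors('Prolog')   # different branch
```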

                                                                stan

------------------------------

Date: Wed 29 Aug 84 10:12-EDT
From: Aaron F. Bobick <AFB%MIT-OZ@MIT-MC.ARPA>
Subject: Understanding speech versus hearing words

About speech recognition:

        From: Sidney Markowitz <sidney%MIT-OZ@MIT-MC.ARPA>

        It turns out that even to separate the syllables in continuous speech
        you need to have some understanding of what the speaker is talking
        about! You can discover this for yourself by trying to hear the sounds
        of the words when someone is speaking a foreign language. You can't
        even repeat them correctly as nonsense syllables.
          What this implies is an approach to speech recognition that goes
        beyond pattern recognition to include understanding of utterances.
        This in turn implies that the system has some understanding of the
        "world view" of the speaker, i.e., common sense knowledge and the
        probable intentions of the speaker.....


Many psycholinguists would dispute this.  The problem with the foreign
language example is that you don't recognize WORDS, not that you don't
understand the utterance (for now let us define understanding as
building some sort of SEMANTIC model; the details don't matter).
Consider the classic "Colorless green ideas sleep furiously."  I doubt
one can "understand" this in any plausible way, yet its encoding is easy.
Even if one removes grammar and is listening to a randomized listing
of Webster's dictionary, one can easily parse the string into syllables
and words.

In fact, *except under noise conditions much worse than normal
conversation*, there is psycho-linguistic evidence that context does
not greatly affect word recognition by humans in terms of the parsing
of the input signal.  ....

(I am oversimplifying a little; there is also evidence that
context can help you make judgements about incoming words and
syllables.  However, this may be a post-access phenomenon, sort of a
surprise effect when an anomalous word or syllable is encountered; the
jury is still out.  Regardless, it is certainly reasonable to consider
a context-independent word recognition system.)

.....  Therefore, it is clearly possible to consider speech
*recognition* as separate from understanding.  Hearsay (I or II) does
not; some psychologists (and, by the way, many AI speech hackers) do.


Stuck in the middle again ......  aaron bobick (afb%mit-oz@mit-mc)

------------------------------

Date: Thu 30 Aug 84 04:59:09-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Re: On having virtually no crime rate.

RE:      "Saudi Arabia has virtually no crime rate," (Olympic commercial)

Every time I heard it, there was this little alarm going off in my head
saying: "Think about it.  There is something wrong here."   To which my
semi-automatic stress-reduction program (always running in the "background")
responded:  "Don't think about it.  This is just another unimportant question
which, at most, is going to mess up the priorities of other, more important
tasks you have to worry about."

Of course, now Wayne has found the "weak spot" in my 'semi-automatic ...'

Turning to Bantam for enlightenment, I find:

virtual         [ML virtualis]
                adj  existing in effect though not in name or fact

virtually       adv  almost; for the most part

And, out of habit, I double-check in Webster (the Time freebie with nearly
unreadable typesetting) and get rewarded with:

virtual         adj  equivalent to, though somewhat different or deficient
                - virtuality, n.


That didn't put me at ease at all,  and I grabbed the "New American Computer
Dictionary" (by Ken Porter) .... well, excuse me, the computer was involved
in presenting me with Wayne's article, right ?

virtual         Giving an appearance of being without actually; an important
                concept in medium- to large-scale data-processing systems,
                in which virtual techniques "trick" the computer system or
                a program into "believing" that there are more resources
                available than there actually are. For further discussion,
                see 'virtual machine', 'virtual storage'

Aha, methinks, the Saudis must have applied a new police technique, where
something or someone is doing some 'tricky' stuff ...  but wait, '... being
without actually' ???  Wasn't it the other way around?  Better check that
in German - and Langenscheidt says...:

virtual         dem Wesen nach, eigentlich

well, that doesn't help much, so I do a 'reverse check' to see what I come
up with (an important technique, with often surprising results; remember
the first Russian automatic translations???)

eigentlich      (genau) proper; (tatsaechlich) actual; (wirklich) true, real;
                (dem Wesen nach) virtual;
                adv  properly; actually; really; (genau gesagt) properly
                speaking; 'das ~e London'  London proper; 'Ich bin ~ froh'
                AmF I am sort of glad; 'was wollen Sie ~?'  what do you want?

See what I mean?  Is it 'real' now or 'virtual'?  'What do you want?'

Well, you have to endure my excursion into Spanish, too, but I spare you the
Latin:  'Diccionario Larousse del Espanol Moderno' says:

virtual         adj  Posible, que no tiene efecto actual. || 'Fis' Que tiene
                existencia aparente pero no real: 'imagen, objecto virtual'

virtualidad     f. Posibilidad.

(dabbling in this 'foreign' mumble makes me wish that everyone had a Mac
 so I could use the proper foreign characters, like 'Umlauts' in German, etc.;
 of course, there'd still be the 'minor' problem of making the main-frames
 cooperate .....)


What's the point of all this?  Well, that's the ultimate test for AI.
You folks can go off now and write a program which will understand when
the Arabs and I together throw our hands up into the air and say:

        WELL,  YOU KNOW WHAT WE MEANT TO SAY .....


But seriously now, folks ....

Maybe DWIM is the real test of artificial intelligence: no more worries
about proper spelling, syntax, or semantics.  No more error messages
from compilers, no more bugs in programs.  For that matter, no more machines
with a habit of crashing.  Why limit ourselves by requiring AI to do only
what we humans can do?  Our 'REAL' intelligence is so bug-ridden that we
are on the verge of self-extinction as a result of our progress.  What I'd
like to see is a Master-Robot of the world, programmed to DWIM (do what I mean)
with one overriding GOAL:

        BUT NO MATTER WHAT I SAY OR DO - DON'T ALLOW ME TO SELF-DESTRUCT !!!
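For the record, the DWIM of Interlisp fame was far humbler than a world-saving Master-Robot: on hitting an unbound or misspelled symbol it searched the known names for a close match and proposed the correction. A minimal sketch of that spelling-correction core, with an invented symbol table:

```python
def edits1(word):
    """Every string one edit away: delete, transpose, replace, insert."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def dwim(symbol, known):
    """Return the unique known symbol one edit from 'symbol', else None.
    The real DWIM also asked the user before applying a correction."""
    candidates = edits1(symbol) & set(known)
    return candidates.pop() if len(candidates) == 1 else None

known = ['define', 'defun', 'setq', 'lambda']      # invented symbol table
assert dwim('defnie', known) == 'define'           # transposition repaired
assert dwim('setqq', known) == 'setq'              # extra letter dropped
assert dwim('xyzzy', known) is None                # nothing close: give up
```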

The idea that the human drive to survive has left us with 'defensive'
weapons which will most likely guarantee our ultimate and definite
demise from this universe makes me want to 'stop the world and get off'.

It makes me SO angry to know that most of this AI-stuff is being developed
to make better instruments of killing and destruction (looked at who's
doing the funding lately?) - and so sad at the same time, knowing that
the last thing the people doing the developing want is to help blow up
the world, or any small part of it, for that matter.  I am afraid that
the AI-community will one day find itself in a situation similar to that
of the nuclear physicists, asking themselves the question:

"But how could we have prevented it?"  There is one thing to be learnt
from the Nuremberg trials after WW2:  there is no sympathy earned with
this statement.

BTW, for all you Commie-hunters out there, I'm as suspicious as the next fellow
of Russian intentions, but I think that these days it's more likely that some
lunatic from some smaller country (no need to focus on anyplace in particular,
really) will light the match which leads to the ultimate explosion.
What I am concerned about is the fact that we all cooperated in developing
the technology which makes the blast so effective and deadly.  And ....
         'being sorry'  isn't going to do a damn bit of good!

---------------------------------------------------------------------------

Well, Wayne, aren't you as sorry as I am that you got me started?
I know that wasn't your intention, but ... so what?

...  off to fix my 'semi-automatic...' so people like Wayne will have
a harder time messing with the priorities of things I need to do ....

------------------------------

End of AIList Digest
********************

∂02-Sep-84  2241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #113    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Sep 84  22:40:54 PDT
Date: Sun  2 Sep 1984 21:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #113
To: AIList@SRI-AI


AIList Digest             Monday, 3 Sep 1984      Volume 2 : Issue 113

Today's Topics:
  Humor - Eliza Passes Turing Test (again),
  AI Tools - Taxonomy Assistant & User Aids,
  Psychology - User Modeling,
  Speech Recognition - Separating Syllables,
  Conferences - Functional Languages
----------------------------------------------------------------------

Date: 29 Aug 84 18:22:02-PDT (Wed)
From: decvax!minow @ Ucb-Vax.arpa
Subject: Eliza Passes Turing Test (again)
Article-I.D.: decvax.59

Readers of net.ai might enjoy this extract from "Computing Across
America,"  Chapter 11: A High-tech Oasis in the Texas Sun, written by
Stephen K. Roberts, published originally in (and Copyright 1984 by)
Online Today, September 1984 (a CompuServe publication).

                        The Phantom SysOp

  (Austin SYSOP speaking)

        "Personally, I get a little tired of answering CHAT
        requests.  That's why I brought up Eliza."

        "You mean..."

        He twinkled with wry humor.  "You got it.  It's the
        perfect Turing test.  I have a second computer hooked
        up to my board system.  When someone issues a CHAT
        request, it says 'Hello?  Can I help you?' I changed
        all the messages so it emulates a sysop instead of a
        psychiatrist.  Some people never do catch on."

        I groaned.  "That's cruel!"
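For anyone who hasn't seen one, the whole trick behind an Eliza is a ranked list of keyword patterns with canned rewrites. A sysop-flavored miniature of the idea (all of the patterns and templates below are invented, not the actual Eliza script):

```python
import re

# Keyword patterns and sysop-flavored reply templates; "%1" is filled in
# with whatever the pattern captured.  All patterns here are invented.
RULES = [
    (re.compile(r'\bi need (.+)', re.I), ["Why do you need %1?",
                                          "Would %1 really help you?"]),
    (re.compile(r'\b(hello|hi)\b', re.I), ["Hello?  Can I help you?"]),
    (re.compile(r'\bare you (.+)', re.I), ["Does it matter whether I am %1?"]),
]
DEFAULT = ["I see.  Please go on.", "Tell me more about that."]

def respond(line, choose=lambda ts: ts[0]):
    """Answer one CHAT line.  'choose' picks among a rule's templates;
    a real system would use random.choice, but the default here is
    deterministic so the example is checkable."""
    for pattern, templates in RULES:
        m = pattern.search(line)
        if m:
            reply = choose(templates)
            if '%1' in reply:
                reply = reply.replace('%1', m.group(1).rstrip('.!?'))
            return reply
    return choose(DEFAULT)
```

The DEFAULT templates are why "some people never do catch on": whatever the caller types, the conversation keeps moving.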

(Transcribed by my colleague, John Wasser.)

Martin Minow
decvax!minow

------------------------------

Date: 31 Aug 1984 11:31:36 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: reply to McGuire about taxonomy assistant


(Reply to Wayne McGuire's comments on the need for a taxonomy assistant: )

I agree with the notion that representing and using conceptual relations
effectively  is one of the central problems of AI research.  You say

     "It seems to me that in the knowledgebase management systems which
     I hope we will see developed in the near future will be embedded rich
     resources for evoking and representing taxonomies. Semantic nets
     provide an ideal scheme with which to do just that."


How do we know that semantic nets are so good?  Isn't this a complex
unsolved problem, for which the effectiveness of semantic nets is still
an open issue?

I suspect that semantic nets are useful for these problems just as
binary notation is useful.  The representative power is there, but
success depends not so much on the distinctive properties of nets as on
the techniques that create and use the nets.  I agree that they look
promising.  (Promises, promises.)

You suggest that a taxonomy assistant might work by operating on the
vocabulary of the domain, relating items.  That sounds like another
promising idea, one that might lead to a very powerful set of
generalizations if it were tried.

In the case that prompted all this, there is no recognized domain or
literature.  I have an experimental program which includes a specialized
internal interface language having several hundred predefined operators.
Doing a taxonomy is one way to increase my understanding of the
interface language.  So I would like to have a taxonomy assistant that
did not have to presume a domain.

Bill Mann

------------------------------

Date: 31 Aug 1984 12:20:08 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: aids to the mind

I've gotten several interesting replies to my inquiry about finding a
"taxonomy assistant"  that could help me in thinking about the
organization of a collection of items.  It raises a larger issue:

        What intellectual operations are worth developing programmed
aids for?

Nobody came up with a pointer to an existing program for the taxonomy
task (except for something named PEGASUS, on the related topic of
vocabulary construction; I need to check it out.)  But still, there
might be other sorts of programmed assistants out there.

Here is a wish list for programmed assistants that could potentially be
important for my lifestyle:

RESOURCE ALLOCATION ASSISTANT:  Given a supply or a flow of resources,
help allocate them to particular uses.  Budgeting, personal time
allocation and machine scheduling are special cases.

TIME ALLOCATION ASSISTANT:  (a specialization, very important to me)
Help work through allocation of my time so that feasible things get
done, infeasible things don't get started, the things that get done are
the important ones,  things tend to get done on time,  allocations get
revised appropriately in the face of change, and the allocation provides
suitable flexibility and availability to other people.

I have in mind here much more than just the scratchpad-and-alarm-clock
kind of time allocation resource.  Those are fine as far as they go, but
they don't go nearly deep enough.  I want something that will ask me the
pertinent questions when they are timely.

EXPOSITORY WRITING ASSISTANT:  In this case, my research on text
generation has gone far enough to assure me that such a program is
feasible.  I have a systematic manual technique that works pretty well
for me, that could be developed into an interactive aid.  It would be
very different from the sentence-critic sort of programs that are now
emerging.

NEGOTIATION ASSISTANT:  There is a viewpoint and a collection of skills
that are very helpful in bargaining to an agreement.  A program could
raise a lot of the right questions.

                              ***

That is just a starter list.  What other sorts of assistants can we
identify or conceive of?

Other ideas can probably be developed from the problem-solving
literature, e.g. Polya, Wickelgren and Lenat.

This sort of thing could go far beyond the capabilities of autonomous AI
programs.  Often there are well known heuristics that are helpful to
people but too indefinite for programs to apply; an assistant could
suggest them.  Proverbs are one sort.

In sum, What do we want, and What do we have?

Bill Mann

------------------------------

Date: 29 Aug 84 14:21:56-PDT (Wed)
From: hplabs!hpda!fortune!amd!decwrl!dec-rhea!dec-bartok!shubin @
      Ucb-Vax.arpa
Subject: Replies to query for citations on user modeling
Article-I.D.: decwrl.3473

I posted a request for some papers on modeling users and/or user behavior,
and promised to post the results of my query.  (The original posting was on
or about 18 July 84).  Here is a summary of the results; a line of hyphens
separates one person's response from another's.  I haven't had time to check
all of them, and I may wind up with more references, which may be posted
later.  Any more suggestions are welcome.  Thanks to all.

------
Elaine Rich, "Users are Individuals: Individualizing User Models"
        Int.J.Man-Machine Studies 18(3), March, 1983.
Zog project at CMU
Elliot Soloway at Yale -- modeling novice programmer behavior
"The Psychology of Human-Computer Interaction" by Card, Moran, and Newell
Current work at Penn by Tim Finin, Ethel Shuster, and Martha Pollack
             at UT at Austin by Elaine Rich
Work on on-line assistance:
        Wizard program by Jeff Shrager and Tim Finin (AAAI 82)
        Integral help by Fenchel and Estrin
        Lisp tutor - John Anderson at CMU
------
Regarding users' models of computer systems:
a.      Shneiderman, B. and Mayer, R. "Syntactic/Semantic Interactions
        in Programmer Behavior: A Model and Experimental Results"
        Int. J. of Computer and Information Sciences, Vol 8, No. 3, 1979
b.      Carroll, J.M., and Thomas, J.C. "Metaphor and the Cognitive
        Representation of Computing Systems" IEEE Trans. on Systems,
        Man, and Cybernetics, Vol SMC - 12, No. 2, March/April 1982.
c.      Anything from the CHI'83 conference -- Human Factors in
        Computing Systems sponsored by ACM.
About Modelling the User:
a.      Card, Moran and Newell, a book whose title escapes me
        offhand -- it has a chapter entitled "The Human Information
        Processor".
b.      Rich, E. "Users are Individuals: Individualizing User Models"
        Int. J. Man-Machine Studies 18, 1983
--------
Peter Polson (U. Colorado) and David Kieras (U. Arizona) have a paper in this
year's Cognitive Science Conference on a program that tests user interfaces
by checking high-level descriptions of user behavior against expected system
behavior.
--------
There was a lot of work done at Xerox PARC in the late
70's on task times and such.   They were interested
in work relating to I/O device design (mice, etc.), as
well as general models.  Some very good task timing
models came out of that work, I believe.
-------
Take a look at the work of Elaine Rich at Texas (formerly CMU).
-------
Chapter 6, The Psychology of Human-Computer Interaction, S.K. Card,
  T.P. Moran, A. Newell
-------
...Some of the results of this are published in the 1983 AAAI Proceedings
in the paper "Learning Operator Semantics by Analogy" by S. Douglas
and T. Moran.

"A Quasi-Natural Language Interface to UNIX"; S. Douglas; Proceedings of
the USA-Japan Conference on Human-Computer Interaction; Hawaii; 18-20 Aug
84; Elsevier.

------------------------------

Date: 31 Aug 84 13:05:13-PDT (Fri)
From: ihnp4!houxm!mhuxl!ulysses!burl!clyde!watmath!utzoo!dciem!mmt @
      Ucb-Vax.arpa
Subject: Re: Hearsay II question in AIList Digest   V2 #110
Article-I.D.: dciem.1098


    It turns out that even to separate the syllables in continuous speech
    you need to have some understanding of what the speaker is talking
    about! You can discover this for yourself by trying to hear the sounds
    of the words when someone is speaking a foreign language. You can't
    even repeat them correctly as nonsense syllables.

I used to believe this myth myself, but my various short visits to Europe
(mostly 1-3 week trips) have convinced me otherwise.  There
is no point trying to repeat syllables as nonsense, partly because the
sounds are not in your phonetic vocabulary.  More to the point, syllable
separation definitely preceded understanding.  I HAD to learn to separate
the syllables of German long before I could understand anything (I still
understand only a tiny fraction, but now I can parse most sentences
into kernel and bound morphemes because I now know most of the common
bound ones).  My understanding of written German is a little better,
and when I do understand a German sentence, it is because I can transcribe
it into a visual representation with some blanks.

(Incidentally, I also do some research in speech recognition, so I am
well aware of the syllable segmentation problem.  There do exist
segmentation algorithms that correctly segment over 95% of the syllables
in connected speech without any attempt to identify phonemes, let
alone words or the "meaning" of speech.  Mermelstein, now in Montreal,
and Mangold in Ulm, Germany, are names that come to mind.)
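Segmentation algorithms of this kind typically work on the loudness contour alone: a syllable boundary is a deep dip in energy between two peaks, with no phoneme identification involved. A toy rendering of the idea on a synthetic envelope (the dip threshold and the data are invented, not parameters from Mermelstein's or Mangold's published algorithms):

```python
def syllable_boundaries(energy, dip=0.5):
    """Return indices of syllable boundaries in a smoothed loudness
    contour: local minima that dip well below both neighbouring peaks.
    The 0.5 dip ratio is an invented tuning constant."""
    bounds = []
    for i in range(1, len(energy) - 1):
        if energy[i] <= energy[i - 1] and energy[i] <= energy[i + 1]:
            left_peak = max(energy[:i])       # loudest point before the dip
            right_peak = max(energy[i + 1:])  # loudest point after it
            if energy[i] < dip * min(left_peak, right_peak):
                bounds.append(i)              # deep valley: call it a boundary
    return bounds

# Synthetic two-syllable envelope: loud burst, quiet gap, loud burst.
envelope = [0.1, 0.6, 0.9, 0.7, 0.2, 0.1, 0.3, 0.8, 1.0, 0.6, 0.2]
assert syllable_boundaries(envelope) == [5]   # one boundary, two syllables
```

Nothing in the loop knows anything about words or meaning, which is the point of the parenthetical above.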
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: Wed 29 Aug 84 10:53:53-EDT
From: Joseph E. Stoy <JES@MIT-XX.ARPA>
Subject: Call For Papers

CALL FOR PAPERS

          FUNCTIONAL PROGRAMMING LANGUAGES AND COMPUTER ARCHITECTURE
                          A Conference Sponsored by
           The International Federation for Information Processing
                        Technical Committees 2 and 10

                                Nancy, France
                           16 to 19 September, 1985


This conference has been planned as a successor to the highly successful
conference on the same topics held at Wentworth, New Hampshire, in October
1981.  Papers are solicited on any aspect of functional or logic programming
and on computer architectures to support the efficient execution of such
programs.

Nancy, in the eastern part of France, was the city of the Dukes of Lorraine; it
is known for its "Place Stanislas" and its "Palais Ducal".  "Art Nouveau"
started there at the beginning of this century.  There are beautiful buildings
and museums and, of course, good restaurants.

Authors should submit five copies of a 3000- to 6000-word paper (counting a
full-page figure as 300 words), and ten additional copies of a 300-word
abstract, to the Chairman of the Programme Committee by 31 January 1985.  The
paper should be typed double-spaced, and the names and affiliations of the
authors should be included on both the paper and the abstract.

Papers will be reviewed by the Programme Committee with the assistance of
outside referees; authors will be notified of acceptance or rejection by 30
April 1985.  Camera-ready copy of accepted papers will be required by 30 June
1985 for publication in the Conference Proceedings.

Programme Committee:
        Makoto Amamiya (NTT, Japan)
        David Aspinall (UMIST, UK)
        Manfred Broy (Passau University, W Germany)
        Jack Dennis (MIT, USA)
        Jean-Pierre Jouannaud (CRIN, France)
        Manfred Paul (TUM, W Germany)
        Joseph Stoy (Oxford University, UK)
        John Williams (IBM, USA)

Address for Submission of Papers:
        J.E. Stoy, Balliol College, Oxford OX1 3BJ, England.

Paper Deadline:  31 January 1985.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To receive a copy of the advance programme, return the following information to
J.E. Stoy, Balliol College, Oxford OX1 3BJ, England
or by electronic mail to JESTOY@UCL-CS.ARPA

I plan to submit a paper: [ ]
        Subject:
Name:
Organisation:
Address:

------------------------------

End of AIList Digest
********************

∂05-Sep-84  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #114    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Sep 84  11:21:01 PDT
Date: Wed  5 Sep 1984 09:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #114
To: AIList@SRI-AI


AIList Digest           Wednesday, 5 Sep 1984     Volume 2 : Issue 114

Today's Topics:
  LISP - LISP for the Eclipse 250 with RDOS,
  Expert Systems - AGE Contact? & Programmed Assistants,
  Speech Understanding - Word Recognition,
  Philosophy - Now and Then,
  Seminars - Bay Area Computer Science,
  Conference - IJCAI-85 Call for Papers
----------------------------------------------------------------------

Date: 4 Sep 1984 9:00-EDT
From: cross@wpafb-afita
Subject: LISP for the Eclipse 250 with RDOS

I recently joined a group here doing low level pattern recognition work
applied to speech recognition and image processing. We have an Eclipse
250 running the RDOS operating system.  We also have C (UNIX Version 7
compatible).  Does anyone out there know of a dialect of LISP that can be
used with this system? Any suggestions? Please respond to the address
listed below. Thanks in advance.

Steve Cross
cross@wpafb-afita

------------------------------

Date: 4 Sep 84 10:29 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: AGE:  Who to contact?

I'm interested in looking into AGE, which was described here as a "Stanford
product."  Does anyone have the name and phone number of whom to contact
to obtain manuals, user's guides, and the like?  Thanks in
advance.

--Ken <Feuerman.pasa@Xerox.ARPA>.

------------------------------

Date: Wed, 5 Sep 84 15:59 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: programmed assistants

In response to Bill Mann's list of desirable mechanised assistants,
one of our graduate students urgently wants to know: if he drops
everything else and writes a thesis-writing assistant, will he get
a PhD for it?
Tony Hasemer.

------------------------------

Date: 4 Sep 84 09:56 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Understanding speech vs. hearing words

The subject has come up about whether one need understand the semantics
of an utterance before one can recognize words, or even syllables.
While it seems a bit of research has been cited for both sides, I
thought it would be interesting to offer an experience of mine for
evidence:

I was travelling in Italy, and it was that time of the evening again:
time to find our daily ration of gelato (Italian ice cream)!  Our search
brought us into a bar of sorts, with Paul Simon's (I think it was Paul
Simon) recording of "Slip Sliding Away" playing in the background.  The
bartender was singing along, only it didn't quite come out right.  What
he was singing was more like "Sleep Sliding Ayway" (all of the vowels
being rather exaggerated).  I regret that I had no way of knowing whether
he had seen the words written down before (which could account for some
of his mispronunciations), but it was pretty clear that he had no idea
of the meaning of what he was singing.


--Ken.

[It seems to me that the same sort of anecdote could be told of any
child; they frequently store and repeat phrases that are to them
merely nonsense (e.g., the alphabet, especially LMNOP).  More to the
point, a good first step in learning any new oral language is to listen
to it, sans understanding, long enough to begin to identify syllables.
This greatly simplifies later word drills since the student can then
grasp the phonetic distinctions that the teacher considers important
(and obvious).  The implication for speech understanding is that it is
indeed possible to identify syllables without understanding, but only
after some training and the development of fairly sophisticated
discriminant capabilities.  -- KIL]

------------------------------

Date: Fri, 31 Aug 84 15:53 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Now and Then

(Tony Hasemer challenges Norm Andrews' faith about cause and effect)

You say: "logical proof involves implication relationships between
discrete statements...causality assumes implication relationships
between discrete events".

Don't think me just another rude Brit, but:-

     > in what sense is a statement not an event
         A statement (as opposed to a record of a statement,
         which is a real-world object) takes place in the
         real world and therefore is an event in the real
         world.

     > what do you mean by "implication"
         This is the nub of all questions about cause and
         effect, and of course the word subsumes the very
         process it tries to describe.  One can say "cause
         and effect", or "implication", or "logically
         necessary", and mean ALMOST the same thing in each
         case.  They all refer to that same intangible feeling
         of certainty that a certain argument is valid or that
         event B was self-evidently caused by event A.

     > what do you mean by "relationship"
         Again, this is a word which presumes the existence
         of the very link we're trying to identify.


   May I suggest the following-

   The deductive logical syllogism (the prototype for all
infallible arguments) is of the form

     All swans are white.
     This is a swan.
     Therefore it is white.

Notice that the conclusion (3rd sentence) is guaranteed to be true only
if the two premises (sentences 1 and 2) are true.  And if you can make any
descriptive statement beginning "All..." then you must be talking
about a closed system.
   Mathematics, for example, is a set of logical statements about
the closed domain of numbers.  It is common, but on reflection rather
strange, to talk about "three oranges" when each orange is unique and
quite different from the rest.  It is clear that we impose number
systems on the real world, and logical statements about the square
root of the number 3 don't tell us whether or not there is a real
thing called the square root of three oranges.
   I'm saying that closed systems do not map onto the real world.
Mathematics doesn't, and nor does deductive logic (you could never
demonstrate, in practice, the truth of any statement about ALL of a
class of naturally-occurring objects).
   On the contrary, the only logic which will in any sense "prove"
statements about the real world (such as that the sun will rise tomorrow)
is INDUCTIVE logic.  Inductive logic and the principle of cause and
effect are virtually synonymous.  Inductive logic is fuzzy (deductive
logic is two-valued), and bootstraps itself into the position of
saying: "this must be true because it would be (inductively) absurd to
suppose the contrary".
   There is no real problem, no contradiction, between the principle
of cause and effect and deductive logic.  There is merely a category
mistake.  The persuasive power of deduction is very appealing, but
to try to justify an inductive argument (e.g. causality) by the
criteria of deductive arguments is like trying to describe the colour
red in a language which has no word for it.  We just have to accept that
in dealing with the real world the elegant and convenient certainties
of the deductive system do not apply.  The best logic we have is
inductive: if I kick object A and it then screams, I assume that it
screamed BECAUSE I kicked it.

   If repeated kicking of object A always produces the concomitant
screams, I have two choices: either to accept the notion of causality,
or to envisage the real world as being composed of a vast series of
arbitrary possibilities, like billions of tossed pennies which only
by pure chance have so far happened always to come down heads.  Personally,
I much prefer a fuzzy, uncertain logic to a chaos in which there is no
logic at all!  Belief in causality, like belief in God, is an act of
faith: you can't hope to PROVE it.  But whichever one chooses, it doesn't
really matter: stomachs still churn and cats still fight in the dark.
The very best solution to the problem of causality is to stop worrying
about it.
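Tony's tossed-pennies picture has a standard toy formalization (my addition, not part of his message): Laplace's rule of succession, which gives the inductive confidence that the next trial will go the same way as the ones observed so far.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's inductive estimate that the next trial succeeds,
    # given `successes` successes in `trials` independent trials
    return Fraction(successes + 1, trials + 2)

# With no evidence at all we are maximally uncertain: 1/2.
# After 100 screams in 100 kicks: 101/102 -- near certainty, never 1.
# Induction delivers confidence, not proof, just as the message argues.
```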

     Tony.

------------------------------

Date: 04 Sep 84  1424 PDT
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Seminars - Abstracts for BATS

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The next Bay Area Theory Seminar (aka BATS) will be at Stanford, this Friday,
7 September.

The talks (and lunch) will take place in Room 200-305. This is a room on the
third floor of History Corner, the NE corner of the Stanford Campus Quadrangle.

The schedule:

10:00am         U. Vazirani (Berkeley):
                "2-Processor Scheduling in Random NC"

11:00am         R. Anderson (Stanford):
                "A P-complete Problem and Approximations to It"

noon:           Lunch

1:00pm          E. Lawler (Berkeley):
                "The Traveling Salesman Problem Made Easy"

2:00pm          A. Schoenhage (Tuebingen, IBM San Jose):
                "Efficient Diophantine Approximation"


*****************************************************************************

ABSTRACTS:

10:00am:        U. Vazirani:

                    "2-Processor Scheduling in Random NC"

(joint work with D. Kozen and V. Vazirani)

The Two-Processor Scheduling Problem is a classical problem in Computational
Combinatorics, and several efficient algorithms have been designed for it.
However, these algorithms are inherently sequential in nature. We give a
randomized poly-log time parallel algorithm (run on a polynomial number of
processors). Interestingly enough, our algorithm for this purely
combinatoric-looking problem draws on some powerful algebraic methods.  The
Two-processor Scheduling problem can be stated as follows:

Given a set S of unit time jobs, and a partial order specifying precedence
constraints among them, find an optimal schedule for the jobs on two identical
processors.


11:00am:        R. Anderson (Stanford):

            "A P-complete Problem and Approximations to It"

The P-complete problem that we will consider is the High Degree
Subgraph Problem.  This problem is: given a graph G=(V,E) and an integer k,
find the maximum induced subgraph of G that has all nodes of degree at least
k.  After showing that this problem is P-complete, we will discuss two
approaches to finding approximate solutions to it in NC.  We will give a
variant of the problem that is also P-complete that can be approximated to
within a factor of c in NC, for any c < 1/2, but cannot be approximated by a
factor of better than 1/2 unless P=NC.  We will also give an algorithm that
finds a subgraph with moderately high minimum degree.  This algorithm exhibits
an interesting relationship between its performance and the time it takes.
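As a concrete aside (my sketch, not Anderson's algorithm): for a fixed k, the maximum induced subgraph with all degrees at least k can be found by greedy "peeling", and it is precisely the sequential character of this peeling that makes the problem resistant to parallelization.

```python
from collections import defaultdict

def high_degree_subgraph(edges, k):
    # Repeatedly delete any node of degree < k; the survivors form the
    # (unique) maximum induced subgraph whose nodes all have degree >= k.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)

# A triangle with a pendant node: the pendant is peeled away for k = 2.
# high_degree_subgraph([(1,2), (2,3), (1,3), (3,4)], 2)  ->  {1, 2, 3}
```

Each deletion can lower a neighbor's degree below k, so deletions cascade; that data dependence is what the P-completeness result captures.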



 1:00pm:        E. Lawler (Berkeley):

                 "The Traveling Salesman Problem Made Easy"

    Despite the general pessimism resulting from both theory and
practice, the TSP is not necessarily a hard problem--there are many
interesting and useful special cases that can be solved efficiently.
For example, there is an efficient procedure for finding an optimal
solution for the bottleneck TSP in the case that the distance matrix
is "graded." This result will be used to show how to solve a problem
of great practical importance to paperhangers: how to cut sheets from
a long roll of paper so as to minimize intersheet wastage.

    Material for this talk is drawn from a chapter, by P. Gilmore,
E.L. Lawler, and D.B. Shmoys, of a forthcoming book, The Traveling
Salesman Problem, edited by Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan,
and D.B. Shmoys to be published by J. Wiley in mid-1985.


 2:00pm:        A. Schoenhage (Tuebingen, IBM San Jose):

                    "Efficient Diophantine Approximation"

Abstract: Given (a_1,...,a_n) in R^d (with d < n) and epsilon > 0, how to find
a nontrivial x = (x_1,...,x_n) in Z^n of minimal Euclidean norm nu such that
|x_1 a_1 + ... + x_n a_n| < epsilon holds. A weak version of this classical
task (where epsilon and nu may be multiplied by 2^(cn)) can be solved in time

                O(n^2 (d*n/(n-d) * log(1/epsilon))^(2+o(1))).

The main tool is an improved basis reduction algorithm for integer lattices.

------------------------------

Date: Tue 4 Sep 84 09:27:09-PDT
From: AAAI-OFFICE <AAAI@SRI-AI.ARPA>
Subject: IJCAI-85 Call for Papers


                                IJCAI-85
                             CALL FOR PAPERS

The IJCAI conferences are the main forum for the presentation of Artificial
Intelligence research to an international audience.  The goal of the IJCAI-85
is to promote scientific interchange, within and between all subfields of AI,
among researchers from all over the world.  The conference is sponsored by the
International Joint Conferences on Artificial Intelligence (IJCAI), Inc., and
co-sponsored by the American Association for Artificial Intelligence (AAAI).
IJCAI-85 will be held at the University of California, Los Angeles from
August 18 through August 24, 1985.

        * Tutorials: August 18-19; Technical Sessions: August 20-24

TOPICS OF INTEREST

Authors are invited to submit papers of substantial, original, and previously
unreported research in any aspect of AI, including:

* AI architectures and languages
* AI and education (including intelligent CAI)
* Automated reasoning (including theorem proving, automatic programming,
  planning, search, problem solving, commonsense, and qualitative reasoning)
* Cognitive modelling
* Expert systems
* Knowledge representation
* Learning and knowledge acquisition
* Logic programming
* Natural language (including speech)
* Perception (including visual, auditory, tactile)
* Philosophical foundations
* Robotics
* Social, economic and legal implications


REQUIREMENTS FOR SUBMISSION

Authors should submit 4 complete copies of their paper.  (Hard copy only, no
electronic submissions.)

        * LONG PAPERS: 5500 words maximum, up to 7 proceedings pages
        * SHORT PAPERS: 2200 words maximum, up to 3 proceedings pages

Each paper will be stringently reviewed by experts in the topic area specified.
Acceptance will be based on originality and significance of the reported
research, as well as the quality of its presentation.  Applications clearly
demonstrating the power of established techniques, as well as thoughtful
critiques of previously published material will be considered, provided that
they point the way to new research and are substantive scientific contributions
in their own right.

Short papers are a forum for the presentation of succinct, crisp results.
They are not a safety net for long paper rejections.

In order to ensure appropriate refereeing, authors are requested to
specify in which of the above topic areas the paper belongs, as well
as a set of no more than 5 keywords for further classification within
that topic area.  Because of time constraints, papers requiring major
revisions cannot be accepted.

DETAILS FOR SUBMISSION

The following information must be included with each paper:

        * Author's name, address, telephone number and net address
          (if applicable);
        * Topic area (plus a set of no more than 5 keywords for
          further classification within the topic area.);
        * An abstract of 100-200 words;
        * Paper length (in words).

The time table is as follows:

        * Submission deadline: 7 January 1985 (papers received after
          January 7th will be returned unopened)
        * Notification of Acceptance: 16 March 1985
        * Camera Ready copy due: 16 April 1985

Contact Points

Submissions should be sent to the Program Chair:

        Aravind Joshi
        Dept of Computer and Information Science
        University of Pennsylvania
        Philadelphia, PA 19104 USA

General inquiries should be directed to the General Chair:

        Alan Mackworth
        Dept of Computer Science
        University of British Columbia
        Vancouver, BC, Canada V6T 1W5

Inquiries about program demonstrations (including videotape system
demonstrations) and other local arrangements should be sent to
the Local Arrangements Chair:

        Steve Crocker
        The Aerospace Corporation
        P.O. Box 92957
        Los Angeles, CA 90009 USA

Inquiries about tutorials, exhibits, and registration should be
sent to the AAAI Office:

        Claudia Mazzetti
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025 USA

------------------------------

End of AIList Digest
********************

∂12-Sep-84  1416	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #115    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Sep 84  14:15:35 PDT
Date: Fri  7 Sep 1984 10:27-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #115
To: AIList@SRI-AI


AIList Digest             Friday, 7 Sep 1984      Volume 2 : Issue 115

Today's Topics:
  LISP - QLAMBDA & Common Lisp,
  Expert Systems - AGE Contact & Expository Writing Assistant,
  Books - Lib of CS and the Handbook of AI,
  AI Tools - Statistical Workstations and Time-Series Lisp,
  Binding - Jim Slagle,
  Speech Recognition - Semantics,
  Philosophy - Induction vs. Deduction & Causality,
  Seminars - A Calculus of Usual Values & Week on Logic and AI
----------------------------------------------------------------------

Date: Thu, 6 Sep 84 8:54:58 EDT
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@BRL-VOC.ARPA>
Subject: QLAMBDA


Does anyone have any information on a new LISP called QLAMBDA ??
It is a "parallel processor" language being developed by McCarthy at
Stanford and is supposed to run on the HEP (Heterogeneous Element
Processor).  Since we have one of the original HEPs, we are interested
in any information regarding QLAMBDA.  Thanks.

                                        dsw, fferd
                                        Fred S. Brundick
                                        USABRL, APG, MD.
                                        <fsbrn@brl-voc>

------------------------------

Date: 13 Aug 84 8:21:00-PDT (Mon)
From: pur-ee!uiucdcsb!nowicki @ Ucb-Vax.arpa
Subject: Re: Common Lisp - (nf)
Article-I.D.: uiucdcsb.5500009

I am also interested in such info. We have Sun-2's running 4.2 and I am
interested in obtaining Common Lisp for them.

-Tony Nowicki
{decvax|inuxc}!pur-ee!uiucdcs!nowicki

------------------------------

Date: Wed 5 Sep 84 17:28:19-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: AGE

Call Juanita Mullen at (415)497-0474 for a good time in obtaining
Stanford programs such as AGE.  It'll cost you about $500.
CJP

------------------------------

Date: 5 Sep 1984 14:08:13 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: Clarification on the non-existence of the Expository Writing Assistant

I've gotten several inquiries asking for the Expository Writing Assistant
Program that I wished for in a previous message.  Unfortunately, it
doesn't exist.  I'm convinced from studying text generation that we have
ENOUGH TECHNICAL INFORMATION about the structure of text, the functions
of various parts and how parts are arranged that such a program could be
written.  My own writing practice, which now in effect simulates such a
program, indicates that the program's suggestions could be very helpful.

An introduction to the text structures I have in mind was presented at
the 1984 ACL/Coling conference at Stanford in July.  The paper was
entitled "Discourse Structures for Text Generation."

Right now I have no plans to create the assistant.

Sorry, folks.
Bill Mann

------------------------------

Date: 4 Sep 84 16:36:13-PDT (Tue)
From: ihnp4!houxm!vax135!cornell!uw-beaver!ssc-vax!adcock @ Ucb-Vax.arpa
Subject: Re: Lib of CS intro offer: Handbook of AI Vols 1-3 for $5

Please note that the Handbook of AI is a REFERENCE book. It is not
meant to be read from cover to cover.

Also, these are the only books on AI that the Lib of CS sells.

[I disagree with the first point.  The Handbook is also an excellent
tutorial, although it does lack illustrations.  I enjoyed reading it
cover to cover (although I admit to not having finished all three
volumes yet).  The second point is largely true, although they have
offered The Brains of Men and Machines, Machine Perception, LISPcraft,
and a few other related books.  -- KIL]

------------------------------

Date: Fri 7 Sep 84 10:15:02-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Statistical Workstations and Time-Series Lisp (Tisp)

Anyone interested in statistical workstations should look up the
August IEEE Computer Graphics and Applications article
"A Graphical Interface to an Economist's Workstation" by Thomas
Williams of Wagner, Stott and Company, 20 Broad Street, New York,
NY 10005.  He describes a prototype for time-series analysis that
was quickly assembled from standard Interlisp-D functions on the
Xerox 1108.  Apparently the economists of the International
Monetary Fund took to it immediately, and Williams sees no problems
in extending its capabilities to better support them.  His company
is also working on a workstation for professional securities traders.

                                        -- Ken Laws

------------------------------

Date: 5 Sep 1984 13:42-EDT
From: Russ Smith <smith@NRL-AIC>
Subject: Binding - Jim Slagle

As of September 10, 1984 Dr. Slagle will have a new address:

        Professor James R. Slagle
        University of Minnesota
        136 Lind Hall
        207 Church Street, S.E.
        Minneapolis, MN  55455

        (612) 373-7513
        (612) 373-0132

        slagle%umn-cs.csnet@csnet-relay.arpa (possibly...)

------------------------------

Date: 5 Sep 84 10:00:24-PDT (Wed)
From: ihnp4!fortune!polard @ Ucb-Vax.arpa
Subject: Re: Understanding speech versus hearing words
Article-I.D.: fortune.4138

<fowniymz for dh6 layn iyt6r>   [Phonemes for the line-eater. -- KIL]

        Which hip was burned?
        Which ship was burned?
        Which chip was burned?
and     Which Chip was spurned?

all sound the same when spoken at the speed of conversational speech.  This
is evidence that in order to recognize words in continuous speech
you (and presumably a speech-recognition apparatus) need to understand
what the speaker is talking about.
        There seem to be two reasons why understanding is necessary
for word recognition in continuous speech:
        1. The existence of homonyms.  This is why "It's a good read."
sounds the same as: "It's a good reed," and why the two sentences
could not be distinguished without a knowledge of the context.
        2. Sandhi, or sound changes at word boundaries. The sounds at the
 end of a word tend to blend into the sounds at the beginning of the next
 word in conversation,  making words sound as if they ran into each other
and making words sound different than they would when said in isolation.
        The resulting ambiguities are usually resolved by context.
        Speech rarely occurs without some sort of context, and even then
the first thing that usually happens is to establish a context for what
is to follow.
        To paraphrase Edsger Dijkstra: "Asking whether computers will
understand speech is like asking whether submarines swim."

--
Henry Polard (You bring the flames - I'll bring the marshmallows.)
{ihnp4,cbosgd,amd}!fortune!polard

------------------------------

Date: Wed 5 Sep 84 10:54:11-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

Tony Hasemer's comments on causality and its relationship to inductive
versus deductive logic are very well-taken.  It's time for people in
AI to realize that deduction is quite limited as a mode of reasoning.
Compared to induction, the mathematical foundations of deduction are
well-understood, and deductive systems are relatively easy to
implement on computers.  This no doubt explains its popularity in AI.
The problem arises when one tries to extend the deductive paradigm
from toy problems to real problems, and must confront exceptions,
borderline cases, and, in general, the boggling complexity of the
state space.

While deduction proceeds from the general (axioms) to the specific
(propositions), induction proceeds from the specific to the general.
This seems to be a more natural view of human intelligence.  By
observing events, one recognizes correlations, and infers causality
and other relationships.  To be sure, the inferences may be wrong, but
that's tough.  People make mistakes.  In fact, one of the weaknesses
of deduction is that it does not permit one to draw conclusions that
may be in error (assuming the axioms are correct), but that represent
the best conclusions under the circumstances.

Visual illusions provide good examples.  Have you ever wondered why
you see a Necker Cube as a cube (one of the two reversals), and not as
one of the other infinite number of possibilities?  Perhaps we learn of
cubes through experience (an inductive explanation), but the effect
also occurs with totally unfamiliar figures.  A more general inductive
explanation holds that we see the simplest possible figure (the
Gestalt principle of Pragnanz).  A cube, which has right angles and
equal-length sides, is simpler than any of the other possibilities.
The concept of "simple" can be made precise: one description is
simpler than another if it can be encoded more economically.  This is
sometimes called the principle of Occam's Razor or the principle of
Minimum Entropy.
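Barnard's "encoded more economically" test can be made concrete with a toy sketch (my example, not from the message): score each description by its empirical Shannon code length, so the more repetitive description comes out cheaper.

```python
import math
from collections import Counter

def shannon_bits(seq):
    # Empirical Shannon code length: sum over symbols of -log2 p(symbol),
    # where p is the symbol's frequency within the sequence itself.
    n = len(seq)
    return sum(c * -math.log2(c / n) for c in Counter(seq).values())

# A cube is one edge length repeated 12 times; an irregular solid needs
# 12 distinct lengths.  The repetitive description costs fewer bits,
# so it is "simpler" in the minimum-description-length sense.
cube = ["edge"] * 12            # perfectly predictable: 0 bits
irregular = list(range(12))     # 12 * log2(12), about 43 bits
```

A real coding scheme must also charge for the codebook, but the ordering is the point: the cube hypothesis wins because it compresses better.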

        Steve Barnard

------------------------------

Date: 6 Sep 84 07:39 PDT
From: Woody.pasa@XEROX.ARPA
Subject: Causality

Food for thought:
  All the arguments for and against cause and effect and the workings of
Causality have been based on the notion that a cause 'A' and its
effect 'B' are time-related:  we assume that for A to affect B, A must
come before B in our perception of time.
  But does this have to be the case?  Mathematics (inductive and
deductive logic) consists of time-independent identities; to argue that
Causality must be a time-dependent phenomenon on the basis of
time-independent arguments is at best wishful thinking.
  What's wrong with event A affecting event B in event A's past?  You
can't go back and shoot your own mother before you were born, because you
exist, and obviously you failed.  If we assume the universe is
consistent [and not random chaos], then we must assume inconsistencies
(such as shooting your own mother) will not arise.  It does not,
however, place time constraints on cause and effect.

    - Bill Woody

Woody.Pasa@XEROX.Arpa   [Until 7 September 1984]
** No net address **    [After 7 September 1984]

------------------------------

Date: Fri, 7 Sep 84 00:35:19 pdt
From: syming%B.CC@Berkeley
Subject: Seminar - A Calculus of Usual Values

  From: chertok@ucbkim (Paula Chertok)
  Subject: Berkeley Cognitive Science Seminar--Sept. 11

                  COGNITIVE SCIENCE PROGRAM

                         Fall 1984

           Cognitive Science Seminar -- IDS 237A


SPEAKER:        L.A. Zadeh
                Computer Science Division, UC Berkeley

TITLE:          Typicality, Prototypicality, Usuality,
                Dispositionality, and Common Sense

          TIME:           Tuesday, September 11, 11 - 12:30pm
          PLACE:          240 Bechtel Engineering Center
          DISCUSSION:     12:30 - 2 in 200 Building T-4


The grouping of the concepts listed in  the  title  of  this
talk is intended to suggest that there is a close connection
between them.  I will describe a general approach  centering
on  the  concept of dispositionality which makes it possible
to formulate fairly precise definitions  of  typicality  and
prototypicality,  and  relate  these concepts to commonsense
reasoning.  These  definitions  are  not  in  the  classical
spirit and are based on the premise that typicality and pro-
totypicality are graded concepts, in the  sense  that  every
object is typical or prototypical to a degree.  In addition,
I will outline what might be  called  a  calculus  of  usual
values.

------------------------------

Date: Thu, 6 Sep 84 16:45:49 edt
From: minker@maryland (Jack Minker)
Subject: WEEK ON LOGIC AND AI


                               WEEK of
            LOGIC and its ROLE in ARTIFICIAL INTELLIGENCE
                                  at
                      THE UNIVERSITY OF MARYLAND
                         OCTOBER 22-26, 1984

The Mathematics and Computer Science Departments at the University
of Maryland at College Park are jointly sponsoring a Special Year in
Mathematical Logic and Theoretical Computer Science.  The week of
October 22-26 will be devoted to Logic and its role in Artificial
Intelligence.  There will be five distinguished lectures as follows:

Monday, October 22: Ray REITER

        "Logic for specification: Databases
        conceptual models, and knowledge representation
        languages"

Tuesday, October 23: John McCARTHY

        "The mathematics of circumscription"

Wednesday, October 24: Maarten VAN EMDEN

        "Strict and lax interpretations of rules in logic programming"

Thursday, October 25: Jon BARWISE

        "Constraint logic"

Friday, October 26: Lawrence HENSCHEN

        "Compiling constraint checking programs in deductive databases"


All lectures will be given at:
        Time: 10:00 AM - 11:30AM

Location: Mathematics Building, Room Y3206

The lectures are open to the public.  If you plan to attend kindly
notify us so that we can make appropriate plans for space.
Limited funds are available to support junior faculty and graduate
students for the entire week or part of the week.  To obtain funds,
please submit an application listing your affiliation and send either
a net message or a letter to:

Jack Minker
Department of Computer Science
University of Maryland
College Park, MD 20742
(301) 454-6119
minker@maryland

------------------------------

End of AIList Digest
********************

∂12-Sep-84  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #116    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Sep 84  15:24:26 PDT
Date: Mon 10 Sep 1984 09:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #116
To: AIList@SRI-AI


AIList Digest            Monday, 10 Sep 1984      Volume 2 : Issue 116

Today's Topics:
  AI Tools - FRL in Franz,
  Robotics - Symbolic Programming Query,
  Psychology - Memory Tests,
  Knowledge Representation - OPS5 Problem,
  LISP - Delayed Reply About muLISP,
  Speech Recognition - Syllables,
  Philosophy - Correction,
  Expert Systems - Personal Assistants,
  Seminar - Semantic Modulation
----------------------------------------------------------------------

Date: 5 Sep 84 10:08:15-PDT (Wed)
From: decvax!mcnc!duke!ucf-cs!bethel @ Ucb-Vax.arpa
Subject: Help : need a Full implementation of FRL.
Article-I.D.: ucf-cs.1468


Does anyone have a full implementation of Minsky's FRL, running
under Unix 4.2 and Franz Lisp ? If so would you please respond and
let me know where you are. I would like to get the sources if
they are available and not protected by company/university policy.

Thanks in advance,

Robert C. Bethel

 ...decvax!ucf-cs!bethel or ...duke!ucf-cs!bethel
        bethel.ucf-cs@Rand-Relay

------------------------------

Date: 3 Sep 84 12:35:53-PDT (Mon)
From: hplabs!hao!denelcor!csu-cs!walicki @ Ucb-Vax.arpa
Subject: prolog/lisp/robotics - query
Article-I.D.: csu-cs.2619

I am looking for information on applications of symbolic computing
(lisp, prolog) in the area of robotics.  I do not have any specifics
in mind; I am interested in any (even fuzzy) intersections of the
abovementioned domains.
Please respond by mail, and I will post a summary in net.ai.

Jack Walicki
Colorado State U.
Computer Science Dept.
(Fort Collins, CO 80523)
{hplabs,hao}!csu-cs!walicki

------------------------------

Date: 9 Sep 84 18:11:40 PDT (Sunday)
From: wedekind.es@XEROX.ARPA
Subject: Memory tests

Someone I know is looking for a battery of well-documented,
self-administered memory tests.  Does anyone know of an accessible
source?

thank you,
                Jerry

------------------------------

Date: Saturday,  8-Sep-84 18:35:50-BST
From: O'KEEFE  HPS (on ERCC DEC-10) <okeefe.r.a.@EDXA>
Subject: OPS5 problem


An MSc student came to me with a problem.  He had a pile of OPS5 rules
and was a bit unhappy about the means he had adopted to stop them
looping.  Each rule looked rather like
    (p pain77
        (task ↑name cert)
        (injury ↑name injury6 ↑cert <C>)
        (symptom ↑name symptom9 ↑present yes)
       -(done pain77)
    -->
        (make done pain77)
        (modify 2 ↑cert (compute ....))
    )
There were dozens of them.  The conflict resolution rule of never
firing the same rule on the same data more than once didn't help, as
modify is equivalent to a delete and a make.  What he actually wanted
can be expressed quite neatly in Prolog:

        candidates(BestToWorst) :-
                setof(W/Injury, weight(Injury, W), BestToWorst).

        weight(Injury, MinusCertainty) :-
                prior_certainty(Injury, Prior),
                findall(P, pro(Injury, P), Ps),
                product(Ps, 1.0, P),
                findall(C, con(Injury, C), Cs),
                product(Cs, 1.0, C),
                MinusCertainty is -(1 - P + P*C*Prior).

        pro(Injury, Wt) :-
                evidence_for(Injury, Symptom, Wt),
                present(Symptom).

        con(Injury, Wt) :-
                evidence_against(Injury, Symptom, Wt),
                present(Symptom).

        product([], A, A).
        product([W|Ws], A, R) :-
                B is A*W,
                product(Ws, B, R).
We managed to produce something intermediate between these two, it
used evidence-for and evidence-against tables in working memory, and
had just two hacky rules instead of the scores originally present.
I did spot a way of stopping the loop without using negation, and
that is to make the "certainty" held in the (injury ↑name ↑cert)
WM elements a gensym whose value is the desired number, then as far
as OPS5 is concerned the working memory hasn't been changed.  Of
course that makes patterns that use the number harder to write, and
seems rather hacky itself.

To come to the point, I have two questions about OPS5.
1) Is there a clean way of coding this in OPS5?  Or should I have
   told him to use EXPERT?
2) As I mentioned, we did manage to do considerably better than his
   first attempt.  But the thing that bothered me was that it hadn't
   occurred to him to use the WM for tables.  The course he's in
   uses the Teknowledge(??) "OPS5 Tutorial" (the one with the Wine
   Advisor) and students seem to copy the Wine Advisor more or less
   blindly.  Is there any generally available GOOD course material on
   OPS5, and if so who do we write to?  Are there any moderate-size
   examples available?

------------------------------

Date: 10 May 84 11:33:00-PDT (Thu)
From: hplabs!hp-pcd!hp-dcd!hpfcls!hpbbla!coulter @ Ucb-Vax.arpa
Subject: Delayed Reply About muLISP
Article-I.D.: hpbbla.4900001

It may not be what you are looking for, but there are several LISP
implementations that run on CP/M.  I bought muLISP which is
distributed by MICROSOFT.  It cost $200.  Because of its larger
address space, you should be able to get a more capable LISP for the
IBM/PC, but it will cost more.  The muLISP is fairly complete, although
the only data type is integer (it can represent numbers up to 10**255).
The DOCTOR (a.k.a. ELIZA) program is supplied with it and it runs.

------------------------------

Date: Fri, 7 Sep 84 17:44 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: understanding speech, syllables, words, etc.

Which hip was burned?
Which ship was burned?
which chip was burned?
Which Chip was spurned?

First of all, I disagree that all 4 sound 'the same' in conversational
speech, esp. the last.  The final [z] in 'was' gets devoiced because of
the voiceless cluster that follows in 'spurned'.  However, of course I do
agree that often/usually context is necessary to DISAMBIGUATE, tho' not
necessarily to understand in the first place.  Since I am already writing
this I might as well give my originally suppressed comments on the first
person's statement that syllable identification requires understanding.
I definitely do not agree with that claim.  Others have mentioned learning
a foreign language by first tuning the ear to the phonetics of the target
language including that target's syllable types, and this is a point well
taken.  The notion of syllable is certainly different in different lgs,
but apparently can be learned without understanding.
The point is even clearer in one's native language.  We have all heard
Jabberwockish type speech and can clearly recognize the syllables and
phonetic elements as 'English', yet we do so without any understanding.

All this assumes that we know just what a syllable is, which we don't,
but that's another argument and is not really suitable for ailist.
-Kurt Godden <godden.gmr@csnet-relay>

------------------------------

Date: 7 Sep 84 9:13:41-PDT (Fri)
From: ihnp4!houxm!vax135!ariel!norm @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: ariel.751

>
> From:  TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
>
> (Tony Hasemer challenges Norm Andrews' faith about cause and effect)
>
> You say: "logical proof involves implication relationships between
> discrete statements...causality assumes implication relationships
> between discrete events".
>
Hold on here!  I, Norm Andrews, didn't say that!  You are quoting someone
going by the name "Baba ROM DOS" who was attempting to disprove my
statement that "The concept of proof depends upon the concepts of cause
and effect, among other things."  Please don't assign other peoples'
statements to me!

I haven't time now to reply to any other part of your posting...

Norm Andrews

------------------------------

Date: 6 Sep 84 7:14:39-PDT (Thu)
From: decvax!genrad!teddy!mjn @ Ucb-Vax.arpa
Subject: Personal Assistants
Article-I.D.: teddy.391

FINANCIAL ASSISTANT

        I think this would be a good one to add to the list of personal
assistants which would be valuable to have.  It could be a great
aid to budgeting and guiding investments.  It should go beyond
simple bookkeeping and offer advice (when it can).  If conflicts
arise in where to spend money, it should be capable of asking
questions to determine what you consider to be more important.

        Additional functionality might include analysis of spending
patterns.  Where does my money go?  Such a question could be
answered by this assistant.  It might include gentle reminders if
you are overspending, not meeting a payment schedule, or forgetting
something.

------------------------------

Date: 7 Sep 1984 15:04-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Semantic Modulation

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

 David McAllester will give the next BBN AI Seminar at 10:30 AM on
Wednesday September 12. The talk is in the 3rd floor large conference room
at 10 Moulton St. Title and abstract follow.

    Semantic Modulation: A New General Purpose Inference Technique

                       David McAllester

             Massachusetts Institute of Technology

        Semantic modulation is a general purpose inference technique
based on the "modulation" of the interpretations of parameters which
appear free in an assertional data base.  A semantic modulation system
includes a finite and fixed set Delta of formulae.  By varying the
interpretation of the free parameters in Delta it is possible to use
the finite and FIXED data base Delta to perform a large set of
inferences which involve reasoning about quantification.  Semantic
modulation is a way of reasoning with quantifiers that does not
involve unification or the standard techniques of universal
instantiation.  Semantic modulation replaces these notions with the
notion of a "binding premise".  A binding premise is a propositional
assumption which constrains the interpretation of one or several free
parameters.

------------------------------

End of AIList Digest
********************

∂12-Sep-84  1650	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #117    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Sep 84  16:49:30 PDT
Date: Wed 12 Sep 1984 10:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #117
To: AIList@SRI-AI


AIList Digest           Wednesday, 12 Sep 1984    Volume 2 : Issue 117

Today's Topics:
  AI Tools - Expert-Ease,
  Expert Systems - Lenat Bibliography,
  Pattern Recognition - Maximal Submatrix Sums,
  Cognition - The Second Self & Dreams,
  Seminars - Computational Theory of Higher Brain Function &
    Distributed Knowledge
----------------------------------------------------------------------

Date: 10 Sep 1984 13:16:20-EDT
From: sde@Mitre-Bedford
Subject: expert-ease

I got a flyer from Expert Systems Inc. offering something called Expert Ease
which is supposed to facilitate producing expert systems. They want $125 for
a demo version, so I thought to inquire if anyone out there can comment on
the thing, especially since the full program is $2000. I'm not eager to buy
a lemon, but if it is a worthwhile product, it might be justifiable as an
experiment.
Thanx in advance,
   David   sde@mitre-bedford

------------------------------

Date: Tue, 11 Sep 84 19:16 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Lenat

   If anyone can suggest any good references, articles, etc. concerning
Lenat's heuristic inferencing machine, I'd be very grateful.

Tony.

[I can suggest the following:

D.B. Lenat, "BEINGS: Knowledge as Interacting Experts,"
Proc. 4th Int. Jnt. Conf. on Artificial Intelligence,
Tbilisi, Georgia, USSR, pp. 126-133, 1975.

D.B. Lenat, AM: An Artificial Intelligence Approach to Discovery
in Mathematics as Heuristic Search, Ph.D. Dissertation,
Computer Science Department Report STAN-CS-76-570,
Heuristic Programming Project Report HPP-76-8,
Artificial Intelligence Laboratory Report SAIL AIM-286,
Stanford University, Stanford, California, 1976.

D.B. Lenat, "Automated Theory Formation in Mathematics,"
5th Int. Jnt. Conf. on Artificial Intelligence, Cambridge, pp. 833-42, 1977.

D.B. Lenat and G. Harris, "Designing a Rule System That Searches for
Scientific Discoveries," in D.A. Waterman and F. Hayes-Roth (eds.),
Pattern-Directed Inference Systems, Academic Press, 1978.

D.B. Lenat, "The Ubiquity of Discovery," National Computer Conference,
pp. 241-256, 1978.

D.B. Lenat, "On Automated Scientific Theory Formation: A Case Study Using
the AM Program," in J. Hayes, D. Michie, and L.I. Mikulich (eds.),
Machine Intelligence 9, Halstead Press (a div. of John Wiley & Sons),
New York, pp. 251-283, 1979.

D.B. Lenat, W.R. Sutherland, and J. Gibbons, "Heuristic Search for
New Microcircuit Structures: An Application of Artificial Intelligence,"
The AI Magazine, Vol. 3, No. 3, pp. 17-33, Summer 1982.

D.B. Lenat, "The Nature of Heuristics," The AI Journal, Vol. 9, No. 2,
Fall 1982.

D.B. Lenat, "Learning by Discovery: Three Case Studies in Natural and
Artificial Learning Systems," in Michalski, Mitchell, and Carbonell (eds.),
Machine Learning, Tioga Press, 1982.

D. B. Lenat, Theory Formation by Heuristic Search,
Report HPP-82-25, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.  To appear in The AI Journal, March 1983.

D. B. Lenat, "EURISKO: A Program that Learns New Heuristics and Domain
Concepts," Journal of Artificial Intelligence, March 1983.  Also available
as Report HPP-82-26, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.

                                        -- KIL]

------------------------------

Date: Wed 12 Sep 84 01:50:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition and Computational Complexity

I have a solution to Jon Bentley's Problem 7 in this month's
CACM Programming Pearls column (September 1984, pp. 865-871).
The problem is to find the maximal response for any rectangular
subwindow in an array of maximum-likelihood detector outputs.
The following algorithm is O(N↑3) for an NxN array.  It requires
working storage of just over half the original array size.

/*
**  maxwdwsum
**
**    Compute the maximum rectangular-window sum in a matrix.
**    Return 0.0 if all array elements are negative.
**
**  COMMENTS
**
**    This algorithm scans the matrix, considering for each
**    element all of the rectangular subwindows with that
**    element as the lower-right corner.  The current best
**    window will either be interior to the previously
**    processed rows or will end on the current row.  The
**    latter possibility is checked by considering the data
**    on the current row added into the best window of each width
**    for each lower-right corner element on the previous row.
**
**    The memory array for tracking maximal window sums could
**    be reduced to a triangular data structure.  An additional
**    triple of values could be carried along with globalmax
**    to record the location and width of the maximal window;
**    saving or recovering the height of the window would be
**    a little more difficult.
**
**  HISTORY
**
**    11-Sep-84  Laws at SRI-AI
**    Wrote initial version.
*/


#include <stdio.h>

/* Sample problem. (Answer is 6.0.) */
#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
    { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

/* Macro to return the maximum of two expressions. */
#define MAX(exp1,exp2)  (((exp1) > (exp2)) ? (exp1) : (exp2))


main()
{

  float globalmax;                      /* Global maximum */
  float M[NCOLS][NCOLS];                /* Max window-sum memory,   */
                                        /* (triangular, 1st >= 2nd) */
  int maxrow;                           /* Upper row index */
  int mincol,maxcol;                    /* Column indices */
  float newrowsum;                      /* Sum for new window row */
  float newwdwsum;                      /* Previous best plus new window row */
  float newwdwmax;                      /* New best for this width */
  int nowrow,nowcol;                    /* Loop indices */


  /* Initialize the maxima registers. */
  globalmax = 0.0;
  for (nowrow = 0; nowrow < NCOLS; nowrow++)
    for (nowcol = 0; nowcol <= nowrow; nowcol++)
      M[nowrow][nowcol] = -1.0E20;

  /* Process each lower-right window corner. */
  for (maxrow = 0; maxrow < NROWS; maxrow++)
    for (maxcol = 0; maxcol < NCOLS; maxcol++) {

      /* Increase window width back toward leftmost column. */
      newrowsum = 0.0;
      for (mincol = maxcol; mincol >= 0; mincol--) {

        /* Cumulate the window-row sum. */
        newrowsum += X[maxrow][mincol];

        /* Compute the sum of the old window and new row. */
        newwdwsum = M[maxcol][mincol]+newrowsum;

        /* Update the maximum window sum for this width. */
        newwdwmax = MAX(newrowsum,newwdwsum);
        M[maxcol][mincol] = newwdwmax;

        /* Update the global maximum. */
        globalmax = MAX(globalmax,newwdwmax);
      }
    }

  /* Print the solution, or 0.0 for a negative array. */
  printf("Maximum window sum:  %g\n",globalmax);
}

------------------------------

Date: Sat 8 Sep 84 11:14:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: The Second Self

The Second Self by Sherry Turkle is an interesting study of the
relationship between computers and people.  In contrast to most
studies I've seen, this is not a collection of sensationalism from the
newspapers combined with the wilder statements from various
professionals.  Rather, it is (as far as I know) the first thorough
and scientific study of the influence of computers on human thinking
(there's even a boring appendix on methodology, for those who are into
details).

The book starts out with analyses of young children's attitudes towards
intelligent games (Merlin, Speak'n'Spell and others).  Apparently, the children
playing with these games spend a great deal of time discussing whether these
games are actually alive or not, whether they know how to cheat, and so forth.
The games manifest themselves as "psychological machines" rather than the
ordinary physical machines familiar to most children.  As such, they prompt
children to think in terms of mental behavior rather than physical behavior,
which is said to be an important stage in early mental development (dunno myself
if psychologists hold this view generally).

The theme of computers as "psychological machines" is carried throughout the
book.  Older children and adolescents exhibit more of a desire to master the
machine rather than just to interact with it, but interviews with them reveal
that they, too, are aware of the computer as something fundamentally different
from an automobile, in the way that it causes them to think.  Computer
hobbyists of both the first (ca 1978) and later generations are interviewed,
and one of them characterizes the computer as "a tool to think with".

Perhaps the section of most interest to AIList readers is the one in which
Turkle interviews a number of workers in AI.  Although the material has an
MIT slant (since that's where she did her research), and there's an excess
of quotes from Pam McCorduck's Machines Who Think, this is the first time
I've seen a psychological analysis of motives and attitudes behind the
research.  Most interesting was a discussion of "egoless thought" - although
most psychologists (and some philosophers) believe that the existence
of self-consciousness and an ego is a prerequisite to thought and
understanding, there are many workers in AI who do not share this view.
The resolution of this question will have profound effects on many of
the current views in psychology.  Along the same lines, Minsky gave a
list of concepts common in computer science which have no analogues in
psychology (such as the notions of "garbage collection" and "pure procedure").

I recommend this book as an interesting viewpoint on computer science in
general and AI in particular.  The experimental results alone are worth
reading it for.  Hopefully we'll see more studies along these lines in the
future.

                                                               stan shebs

------------------------------

Date: Wed 12 Sep 84 09:58:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Dreams

A letter by Donald A. Windsor in the new CACM (September, p. 859) suggests
that the purpose of dreams is to test our cognitive models of the people
around us by simulating their behavior and monitoring for bizarre
patterns.  He claims that the "dream people" are AI programs that
we construct subconsciously.

                                        -- Ken Laws

------------------------------

Date: 09/11/84 13:56:44
From: STORY
Subject: Seminar - Computational Theory of Higher Brain Function

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


TITLE:    ``A Computational Theory of Higher Brain Function''

SPEAKER:  Leslie M. Goldschlager, Visiting Computer Scientist, Stanford
          University

DATE:     Friday, September 14, 1984
TIME:     Refreshments, 3:45pm
          Lecture, 4:00pm
PLACE:    NE43-512a

     A new model of parallel computation is proposed.  The fundamental
item of data in the model is called a "concept", and concepts may be
stored  on  a  two-dimensional   data  structure  called  a   "memory
surface".  The  nature  of the  storage  mechanism and  the  mode  of
communication which is required between storage locations renders  the
model suitable for implementation in VLSI.  An implementation is  also
possible  with  neurons  arranged  in a two-dimensional sheet.  It  is
argued that the model is particularly worth studying as it
captures some of the computational characteristics of the brain.

     The memory surface consists of a vast number of processors  which
are called "columns" and  which operate asynchronously in  parallel.
Each processor stores a small amount of information and can be thought
of as a simple finite-state  transducer.  Each processor is  connected
only to those processors within a small radius, or neighbourhood.   As
is usually found with parallel computation, the most important  aspect
of the model is the method of communication between the processors.

     It is  shown in  the  talk  how the  function of  the  individual
processors and the communication  between them supports the  formation
and storage of associations between concepts.  Thus the memory surface
is in effect an associative  memory.  This type of associative  memory
reveals a number of interesting computational features, including  the
ability to store and retrieve sequences of concepts and the ability to
form abstractions from simpler concepts.

     Certain capabilities taken from the realm of human activities are
shown to  be explainable  within the  model of  computation  presented
here.  These include creativity, self, consciousness and free will.  A
theory of sleep is also presented which is consistent with the  model.
In general it is  argued that the  computational model is  appropriate
for describing  and  explaining the  higher  functions of  the  brain.
These are  believed to  occur in  a  region of  the brain  called  the
cortex, and the known anatomy of  the cortex appears to be  consistent
with the memory surface model discussed in this talk.

HOST:   Professor Gary Miller

------------------------------

Date: Mon, 10 Sep 84 17:38:55 PDT
From: Shel Finkelstein <SHEL%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - Distributed Knowledge

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

  [...]

  Thurs., Sept. 13 Computer Science Seminar
  3:00 P.M.   KNOWLEDGE AND COMMON KNOWLEDGE IN A DISTRIBUTED
  Front Aud.  ENVIRONMENT
            By examining some puzzles and paradoxes, we argue
            that the right way to understand distributed
            protocols is by considering how messages change the
            state of a system.  We present a hierarchy of
            knowledge states that a system may be in, and discuss
            how communication can move the system's state of
            knowledge up the hierarchy.  Of special interest is
            the notion of common knowledge.  Common knowledge is
            an essential state of knowledge for reaching
            agreements and coordinating action.  We show that in
            practical distributed systems, common knowledge is
            not attainable.  We introduce various relaxations of
            common knowledge that are attainable in many cases of
            interest.  We describe in what sense these notions
            are appropriate, and discuss their relationship to
            each other.  We conclude with a discussion of the
            role of knowledge in a distributed system.
            J. Halpern, IBM San Jose Research Lab
            Host:  R. Fagin


  Please note change in directions due to completion of new Monterey
  Road (82) exit replacing the Ford Road exit from 101.  [...]

------------------------------

End of AIList Digest
********************

∂13-Sep-84  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #118    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Sep 84  23:30:07 PDT
Date: Thu 13 Sep 1984 21:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #118
To: AIList@SRI-AI


AIList Digest            Friday, 14 Sep 1984      Volume 2 : Issue 118

Today's Topics:
  AI Tools - MACSYMA Copyright,
  Philosophy - The Nature of Proof,
  Robotics - Brian Reid's Robot Cook,
  Humor - Self-Reference & Seminar on Types in Lunches,
  Journals - Sigart Issue on Applications of AI in Engineering
----------------------------------------------------------------------

Date: 10 September 1984 15:29-EDT
From: Paula A. Vancini <PAULA @ MIT-MC>
Subject: MACSYMA Notice

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

  TO:  ALL MACSYMA USERS
  FROM:  MIT Patent and Copyright Office
  DATE:  August 31, 1984
  SUBJECT:  Recent Notices by Paradigm Associates Regarding MACSYMA Software

Please be advised that the version of MACSYMA designated by Paradigm
Associates in recent messages over this network as "DOE MACSYMA" is a
version of MACSYMA copyrighted to MIT.  "DOE MACSYMA" is an improper
designation.  MIT has delivered a copy of the MIT MACSYMA software to
DOE, pursuant to MIT's contractual obligations to DOE.

Also be advised that Symbolics, Inc. is the only commercial company
authorized by MIT to perform maintenance services on, or to make
enhancements to, the MIT copyrighted versions of MACSYMA.

MIT hereby disclaims any association with Paradigm Associates and has
not granted Paradigm licensing rights to commercially make use of its
copyrighted versions of the MACSYMA or NIL software.


Queries to Hynes%MIT-XX@MIT-MC

------------------------------

Date: 10 Sep 84 14:33:25-PDT (Mon)
From: decvax!genrad!teddy!rmc @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: teddy.403

        I am not sure I agree that an inductive proof proves any more
or less than a deductive proof.  The basis of induction is to claim
1)  I have applied a predicate to some specific cases within a large
    set (class) of cases.
2)  I detect a pattern in the results of the predicate over those cases.
3)  I predict that the results of the predicate will continue following
    the pattern for the rest of the cases in the set.
I state the proof pattern this way to include inductive arguments about
natural world phenomena as well as mathematical induction.

The proof is valid if the accepted community of experts agrees that the
proof is valid (see, for example, various Wittgenstein and Putnam essays
on the foundations of mathematics and logic).  The experts could be
wrong for a variety of reasons.  Natural law could change.  The
argument may be so complicated that everyone gets lost and misses a
mistake (this has even happened before!).  The class of cases may be
poorly chosen.  And so on.

        The disagreement seems to be centered around a question of
whether this community of experts accepts causality as part of the
model.  If it is, then we can use causality as an axiom in our proof
systems.  But it still boils down to what the experts accept.

                                        R Mark Chilenskas
                                        decvax!genrad!teddy!rmc

------------------------------

Date: 11 Sep 84 9:27:15-PDT (Tue)
From: ihnp4!houxm!mhuxl!ulysses!allegra!princeton!eosp1!robison @
      Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: eosp1.1106

Mark Chilenskas' discussion of inductive proof is not correct for
mathematics, and it greatly understates the strength of
mathematical inductive proofs.  These work as follows:

Given a hypothesis;

- Prove that it is true for at least one case.
- Then prove that IF IT IS TRUE FOR A GENERIC CASE,
  IT MUST BE TRUE FOR THE NEXT GENERIC CASE.

For example, in a hypothesis about an expression with regard
to all natural numbers, we might show that it is true if "n=1".
We then show that IF it is true for "n", it is true for "n+1".

By induction we have shown that the hypothesis is absolutely true
for every natural number.  Since true: n=1 => true for n=2,
                                 true: n=2 => true for n=3, etc.
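[A concrete instance of this schema, using the classical claim that
the sum of the first n odd numbers is n squared:]

```latex
% Claim: \sum_{k=1}^{n} (2k-1) = n^2 for every natural number n.
%
% Base case (n = 1):  2(1) - 1 = 1 = 1^2.
%
% Inductive step: assume the claim holds for n; then
\[
\sum_{k=1}^{n+1} (2k-1)
  = \underbrace{\sum_{k=1}^{n} (2k-1)}_{=\,n^2} + \bigl(2(n+1)-1\bigr)
  = n^2 + 2n + 1
  = (n+1)^2 .
\]
% By induction, the claim holds for all natural numbers n.
```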

It is the responsibility of the prover to prove that induction
through all generic cases is proper; when it is not, additional
specific cases must be proved, or induction may not apply at all.

Such an inductive proof is absolutely true for the logical system it
is defined in, and just as correct as any deductive proof.
When our perception of the natural laws change, etc., the proof
remains true, but its usefulness may become nil if we perceive
that no system in the real world could possibly correspond to the proof.

In non-mathematical systems, it is possible that both deductive
and inductive proofs will be seriously flawed, and I doubt one
can meaningfully prefer "approximate proofs" of one type over the other.
If a system is not well-enough defined to permit accurate logical
reasoning, then the chances are that an ingenious person can
prove anything (see net.flame and net.religion for examples, also
the congressional record).

        - Toby Robison (not Robinson!)
        allegra!eosp1!robison
        or: decvax!ittvax!eosp1!robison
        or (emergency): princeton!eosp1!robison

------------------------------

Date: Thu 13 Sep 84 09:14:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Inductive Proof - The Heap Problem

As an example of improper induction, consider the heap problem.
A "heap" of one speck (e.g., of flour) is definitely a small heap.
If you add one speck to a small heap, you still have a small heap.
Therefore all heaps are small heaps.

                                        -- Ken Laws

------------------------------

Date: Fri 7 Sep 84 09:40:42-CDT
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: Robot chef bites off too much

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

Our West Coast correspondent has returned with (among other things) an
article from the San Jose Mercury News entitled "Robot cooks if it finds
the beef" about Brian Reid's(TM){1} attempts to program a robot to cook beef
Wellington.   [...]

Aaron
{1}  Brian Reid is a trademark of Scribe Inc., Ouagadougou, Burkina Faso.


[I have copied the following excerpts from the June 10 article. -- KIL]

                   Robot cooks if it finds the beef
                            by Kathy Holub

  Some professors will do anything for a theoretical exercise.  Brian
K. Reid, a food-loving assistant professor of electrical engineering
at Stanford University, recently tried to simulate teaching a mindless
robot how to cook beef Wellington, using Julia Child's 16-page recipe.
He failed.  Try telling a robot what "spread seasoning evenly" means.
"You have to specify the number of grams (of seasoning) per square
centimeter," he said, with a wry smile.

  It took him 13 hours and 60 pages of computer instructions just to
teach the would-be automaton how to slice and season a slab of beef
and put it safely in the oven.  Julia Child takes only three pages to
explain these simple steps.  "Where I bogged down -- where I gave it
all up and decided to go to bed -- was when I had to tell the robot
how to wrap the beef in pastry," he said.

  But Reid, an excellent cook with a doctorate in computer science,
was thrilled with the experiment, which involved only the computer
program and not an actual robot.  "It was exactly what I wanted," he
said.  "It showed that a cookbook does not tell the whole story, that
there is a lot of information missing from the recipe" that human
cooks provide without knowing it.  The Wellington exercise, he
believes, will help him reach his real goal: to teach a computer how
to make integrated circuits with a computer "recipe" that doesn't
depend on human judgement, memory or common sense.

[...]

  He picked the recipe for his experiment because it's the longest
one in the book, involving 26 ingredients.  Beef Wellington is a long
piece of tenderloin that is baked twice, the second time in a light
pastry crust that should turn golden brown.  Forget telling the robot
what "golden brown" means.

  "Every time I turned around I discovered massive numbers of
things I was contributing without even thinking about it."
For example, "Julia Child has, 'you slice the beef and season
each piece separately'" before cooking, he said.  "The meat must
be cold or it won't hold its shape, but Julia doesn't tell you
that.  She assumes you know."

  For purposes of simplicity, Reid let the robot skip the slicing of
mushrooms and onions and sauteing them in butter "until done."
"Cooking until done requires a great deal of knowledge.  A robot
doesn't know that fire [in the pan] isn't part of the process.  It
would happily burn the pan."

  But just telling the robot how to slice the meat, season it,
reassemble it with skewers and put it in the oven was tricky enough --
like teaching a 3-year-old to fix a car.  "You can't just say, 'Cut
into slices,'" Reid said.  "You have to say, 'Move knife one centimeter
to the east, cut.'  And that assumes a sub-program telling the robot
what 'cut' means."  You can't tell a robot to slice 'across.'  "Across
what?" said Reid.  "You can't tell a robot to eyeball something.  You
have to tell it to define the center of gravity of the beef, find the
major axis of the beef and cut perpendicular to it."  You also have to
tell the robot how to find the beef, that is, distinguish it from the
other ingredients, and when to stop slicing.  These are standard
problems in robotics.
 
  Other problems are not so standard.  Reid forgot to specify that the
skewers should be removed before the pastry shell is added.  Julia may
be forgiven for leaving this step out, but the robot trainer has
tougher work.

------------------------------

Date: 9 September 1984 04:04-EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Humor in A.I.?

I saw the following button at a science fiction convention:

    Q.  Why did Douglas Hofstadter cross the road?

    A.  To make this riddle possible.

-- Steve

------------------------------

Date: 11 Sep 1984  14:52 EDT (Tue)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: Humor - Seminar on Types in Lunches

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

              GENERALIZED TYPES IN GRADUATE STUDENT LUNCHES

FIRST MEETING:  Friday, Sept. 14, 1984, 12:00 noon
PLACE:          MIT AI Lab Playroom, 545 Tech. Sq., Cambridge, MA, USA
ORGANIZER:      Walter Hamscher, (walter@oz)

An eating seminar about generalized cold cuts and spread-recognition;
gluttonism, leftovers, and indigestion; related notions appearing
in current and proposed lunches, such as volunteers, menus, and
The Roosevelt Paradox ("There is no such thing as a free lunch")
will be discussed.  The slant will be toward identifying the
underlying digestional problems raised by the desired menu features.
For the first five minutes (during the visit of Prof. Gustav Fleischbrot,
Univ. of Essen) we will present and discuss the papers below starting
with the first two and concluding with the final two:

1. Burger, Chip N., ``The Nutritional Value of Pixels'',
PROC. INT'L. CONF. 5TH GENERATION INGESTION SYSTEMS, Tokyo, to
appear.  Manuscript from Dept. of Computer Science, Univ. of Sandwich, 1984.

2. Burger, Chip N. and Gelly Muffin, ``A Kernel language for abstract
feta cheese and noodles'', SEMANTICS OF FETA CHEESE: PROCEEDINGS, (eds.)
Cream, MacFried and Potstick, Springer-Verlag, Lect. Notes in Comp. Sci.
173, 1-50, 1984.

3. MacDonald, Ronald, ``Noodles for standard ML'', ACM SYMP. ON LINGUICA
AND LINGUINI, 1984.

4. Munchem, J. C., ``Lamb, D-Calories, Noodles, and Ripe Fruit'',
Ph.D. Thesis, MIT, Dept. of EECS, September, 1984.

Meeting time for the first five minutes is Fri. 12:00-12:05, and
Friday 12:00-12:05 thereafter.  Aerobics course credit can be arranged.

------------------------------

Date: Wednesday, 5 September 1984 23:28:30 EDT
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Special Sigart Issue on Applications of AI in Engineering


                       SPECIAL ISSUE ON APPLICATIONS OF
                               AI IN ENGINEERING

The April 1985 issue of the SIGART newsletter (tentative schedule) will
focus on the applications of AI in engineering.  The purpose of this
issue is to provide an overview of research being conducted in this
area around the world.  The following topics are suggested:

   - Knowledge-based expert systems
   - Intelligent computer tutors
   - Representation of engineering problems
   - Natural language and graphical interfaces
   - Interfacing engineering databases with expert systems

The above topics are by no means exhaustive; other related topics are welcome.

Individuals or groups conducting research in this area and who  would  like  to
share  their  ideas  are invited to send two copies of 3 to 4 page summaries of
their work,  preferably  ongoing  research,  before  December  1,  1984.    The
summaries  should  include  a  title,  the  names of people associated with the
research, affiliations, and bibliographical references.  Since the primary  aim
of  this  special  issue  is  to provide information about ongoing and proposed
research, please be as brief  as  possible  and  avoid  lengthy  implementation
details.    Submissions  should  be  sent  to D. Sriram (or R. Joobbani) at the
following address or through Arpanet to Sriram@CMU-RI-CIVE.

      D. Sriram
      Design Research Center
      Carnegie-Mellon University
      Pittsburgh, PA 15213
      Tel. No. (412)578-3603

------------------------------

End of AIList Digest
********************

∂16-Sep-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #119    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Sep 84  16:54:00 PDT
Date: Sun 16 Sep 1984 15:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #119
To: AIList@SRI-AI


AIList Digest            Sunday, 16 Sep 1984      Volume 2 : Issue 119

Today's Topics:
  LISP - VAX Lisps & CP/M Lisp,
  Philosophy - Syllogism Correction,
  Scientific Method - Induction vs. Deduction,
  Course - Logic Programming,
  Conference - Database Systems
----------------------------------------------------------------------

Date: Sun, 16 Sep 84 14:28 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Lisp on the VAX

  We have a VAX 11/750 with four Mb of memory, running NIL.  We also have
four Lisp hackers of several years' standing who are likely to write
quite substantial programs.  We have to decide whether to buy some extra
memory, or to spend the money on Golden Common Lisp, which someone told
us is much more efficient than NIL.

  Can anyone please advise us? Thank you.

   Tony.

------------------------------

Date: 11 Sep 84 17:36:37-PDT (Tue)
From: hplabs!sdcrdcf!sdcsvax!stan @ Ucb-Vax.arpa
Subject: Lisp under CP/M
Article-I.D.: sdcsvax.52

I recently purchased a copy of ProCode's Waltz Lisp for the Z80 and CP/M
and found it to be a very good imitation of Franz Lisp.

I downloaded some rather substantial programs I'd written over the
past two years and within 20 minutes had them up and running on my
Kaypro.  Surprisingly, there was little speed degradation unless a
major amount of computation was involved.

All that was required (for my programs) was a few support routines to
implement defun, terpri, etc.

The manual is very complete and well written.  (For example, it includes
examples showing how to write defun.)

Cost was just under $100.00, and well worth it.

Now, if only my Kaypro could handle background processes like the VAX...

    Stan Tomlinson

------------------------------

Date: 11 Sep 84 11:06:09-PDT (Tue)
From: hplabs!hao!seismo!rochester!rocksvax!rocksanne!sunybcs!gloria!colonel
      @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: gloria.535

>>           All swans are white.
>>           This is a swan.
>>           Therefore it is white.
>>
>>      Notice that the conclusion (3rd sentence) is only true iff the two
>>      premises (sentences 1 and 2) are true.

A minor correction:  "iff" does not belong here.  The premises do not follow
from the conclusion.
--
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 14 Sep 84 09:01 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Re:  Inductive Proof - The Heap Problem

At the risk of getting involved.....

One thing bothers me about the inductive proof that all heaps are small.
I will claim that that is NOT an inductive proof after all.  The second
requirement for a (mathematical) proof by induction states that one must
show that P(n) implies P(n+1).  I see nothing in the fact that one
"speck" is small that NECESSARILY implies that two "specks" constitutes
a small heap.  One seems to conclude the fact that a two-speck heap is
small from some sort of outside judgment of size.  Thus, Small(1 Speck)
does NOT imply Small(2 Specks); something else implies that.

Lest we get into an argument about the fact that large for one could be
small for another, I'll bring up another mathematical point:  The
Archimedean Principle.  It basically says that given any number (size,
number of specks, what have you), one can ALWAYS find a natural number
that is greater.  Applying that to the heap problem, given anyone's
threshold of what constitutes a large heap and what constitutes a small
heap, one can ALWAYS make a large heap out of a small heap by adding one
speck at a time.  I'll further note that one need not make that
transition between small and large heaps a discrete number; as long as
you can put a number on some sense of a large heap (regardless of
whether that is the smallest large heap), you can always exceed it.  For
example, I will arbitrarily say that 10**47 specks in a heap makes it
large.  I don't have to say that 10**47 - 1 is small.  Yet we will still
be able to create a large heap (eventually).

Now, anyone interested in speculating about what happens if someone's
size function is not constant, but varies with time, mood, money in the
bank, etc.?

As further proof of my Archimedean Principle, we will note that I have
just in fact turned a small heap/argument (Ken Laws' four line Heap
Problem) into a large one (this message).

--Ken <Feuerman.pasa@Xerox.arpa>

------------------------------

Date: Fri 14 Sep 84 14:30:14-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

The discussion of induction vs. deduction has taken a curious turn.
Normally, when we speak of induction, we don't mean *mathematical
induction*, which is a formally adequate proof technique.  We mean
instead the inductive mode of reasoning, which is quite different.
Inductive reasoning can never be equated to deductive reasoning
because it begins with totally different premises.  Inductive
reasoning involves two principles:

(1) The principle of insufficient reason, which holds that in the
absence of other information, the expectation over an ensemble of
possibilities is uniform (heads and tails are equally probable).

(2) The principle of Occam's razor, which holds that given a variety of
theories about some data, the one that is "simplest" is preferred.
(We prefer the Copernican model of the solar system to the Ptolemaic
one, even though they both account for the astronomical data.)

The relationship of time, causality, and induction has been
investigated by the Nobel Laureate, Ilya Prigogine.  The laws of
classical physics, with one exception, are neutral with respect to the
direction of time.  The exception is the Second Law of Thermodynamics,
which states that the entropy of a closed system must increase, or
equivalently, that a closed system will tend toward more and more
disordered states.  For a long time, physicists tried to prove the
Second Law in terms of Newtonian principles, but with no success.
Eventually, Boltzmann and Gibbs explained the Second Law
satisfactorily by using inductive principles to show that the
probability of a system entering a disordered, high-entropy state is
far higher than the converse.  Prigogine proposes that random,
microscopic events cause macroscopic events to unfold in a
fundamentally unpredictable way.  He extends thermodynamics to open
systems, and particularly to "dissipative systems" that, through
entropy exchange, evolve toward or maintain orderly, low-entropy
states.

Inductive reasoning is also closely connected with information theory.
Recall that Shannon uses entropy as the measure of information.
Brillouin, Carnap, and Jaynes have shown that these two meanings of
entropy (information in a message and disorder of a physical system)
are equivalent.

Steve Barnard

------------------------------

Date: Wed 12 Sep 84 21:16:28-EDT
From: Michael J. Beckerle <BECKERLE@MIT-XX.ARPA>
Subject: Course Offering - Logic Programming

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


              TECHNOLOGY OF LOGIC PROGRAMMING

                          CS  270
                         Fall  1984

              Professor Henryk Jan Komorowski
                     Harvard University
                 Aiken Computation Lab. 105
                          495-5973

Meeting:  Mondays, Wednesdays - 12:30 to 2 PM, Pierce Hall 209

This year the course will focus on presenting basic concepts
of  logic programming by deriving them from logic.  We shall
study definite clause programs:

    - What they specify (the least Herbrand model).
    - How they are used: a logical view  of  the  notion  of
      query.
    - What computations of logic programs are:  the  resolu-
      tion principle, SLD-refutability, completeness and nega-
      tion by failure.

This general background will serve as a basis for introducing
the logic programming language Prolog, and will be accompanied
by a number of assignments to master specification programming.
It  will  be followed by some implementation issues like in-
terpreting, compiling, debugging and other programmer's sup-
port  tools.   We shall then critically investigate a number
of applications of Prolog to  software  specification,  com-
piler writing, expert system programming, embedded languages
implementation, database programming, program transformations,
etc., and study the language's power and limitations.
The course will end with a  comparison  of  definite  clause
programming to other formalisms, e.g. attribute grammars,
functional programming, and rule-based programming.  Time
permitting, parallelism, complexity, and other topics of
interest will be studied.

REQUIREMENTS  A background in propositional logic, some
familiarity with predicate calculus, and a general background
in computer science (reasonable acquaintance with parsing,
compiling, databases, programming in recursive languages,
etc.) is expected.

WORKLOAD
    - one problem set on logic.
    - Two sets of Prolog assignments.
    - Mid-term, mid-size, single-person Prolog project.
    - A substantial number of papers to read: core papers
      and selected one-topic papers (the latter to be reviewed
      in sections).
    - Final research paper on an individually selected topic
      (with instructor's consent).

LITERATURE, REQUIRED

PROGRAMMING IN PROLOG, by Clocksin and Mellish.
RESEARCH PAPERS distributed in class.


LITERATURE, OPTIONAL

LOGIC FOR PROBLEM SOLVING, by Kowalski
MICRO-PROLOG: LOGIC PROGRAMMING, by Clark and McCabe
LOGIC AND DATABASES, edited by Gallaire and Minker
IMPLEMENTATIONS OF PROLOG, edited by Campbell


                       TENTATIVE PLAN
                        25 meetings

- Introduction: declarative and imperative programming, the
goals of the Fifth Generation Project.

- Informal notions of model, truth, and provability.  The
syntax of predicate calculus; proof systems for predicate
calculus; completeness, soundness, models.

- Transformation to clausal form,  resolution  and  its com-
pleteness.

- Definite clause programs:

        * operational semantics
        * proof-theoretic semantics
        * fixed point semantics

- Introduction to programming in Prolog.
- Data structures.
- Negation by failure and cut.
- Specification programming methodology.
- Advanced Prolog programming.
- Algorithmic debugging.
- Parsing and compiling in Prolog.
- Abstract data type specification in Prolog.
- Logic  programming  and  attribute  grammars,  data  flow
  analysis.
- Interpretation and compilation of logic programs

- Artificial intelligence applications:

        * metalevel programming
        * expert systems programming
        * Natural language processing

- Alternatives to Prolog;  breadth-first  search,  coroutines,
  LOGLISP, AND- and OR-parallelism.
- Concurrent Prolog.
- Relations between LP and functional programming.
- LP and term rewriting.
- Program transformation and derivation.
- Object oriented programming.
- Some complexity issues.
- LP and databases.
- Architecture for LP.

------------------------------

Date: Wed, 12 Sep 84 10:40:23 pdt
From: Jeff Ullman <ullman@diablo>
Subject: Conference - Database Systems

                      CALL FOR PAPERS

        FOURTH ANNUAL ACM SIGACT/SIGMOD SYMPOSIUM ON
               PRINCIPLES OF DATABASE SYSTEMS

             Portland, Oregon March 25-27, 1985


The conference will  cover  new  developments  in  both  the
theoretical  and  practical  aspects  of  database  systems.
Papers  are  solicited  that  describe  original  and  novel
research into the theory, design, or implementation of data-
base systems.

     Some suggested but not  exclusive  topics  of  interest
are:  application of AI techniques to database systems, con-
currency control, database and database scheme design,  data
models,  data  structures  for physical database implementa-
tion,  dependency  theory,  distributed  database   systems,
logic-based  query languages and other applications of logic
to database systems, office automation  theory,  performance
evaluation  of database systems, query language optimization
and implementation, and security of database systems.

     You are invited  to  submit  9  copies  of  a  detailed
abstract (not a complete paper) to the program chairman:

                  Jeffrey D. Ullman
                  Dept. of Computer Science
                  Stanford University
                  Stanford, CA 94305

Submissions will be evaluated on the basis of  significance,
originality,  and  overall quality.  Each abstract should 1)
contain enough information  for  the  program  committee  to
identify  the  main contribution of the work; 2) explain the
importance of the work, its novelty, and  its  relevance  to
the  theory  and/or  practice  of  database  management;  3)
include comparisons with and references to relevant  litera-
ture.   Abstracts  should be no longer than 10 typed double-
spaced pages (12,000 bytes of source text).  Deviations from
these  guidelines may affect the program committee's evalua-
tion of the paper.

                     Program Committee

             Jim Gray            Richard Hull
             Frank Manola        Stott Parker
             Avi Silberschatz    Jeff Ullman
             Moshe Vardi         Peter Weinberger
             Harry Wong

The deadline for submission  of  abstracts  is  October  12,
1984.   Authors  will be notified of acceptance or rejection
by December 7, 1984.  The accepted papers, typed on  special
forms or typeset camera-ready in the reduced-size model page
format, will be due at the  above  address  by  January  11,
1985.   All  authors  of accepted papers will be expected to
sign copyright release forms.  Proceedings will  be  distri-
buted at the conference and will be available for subsequent
purchase through ACM.  The proceedings  of  this  conference
will  not  be  widely disseminated.  As such, publication of
papers in this record will not, of itself, inhibit  republi-
cation in ACM's refereed publications.


        General Chairman:      Local Arrangements Chairman:
        Seymour Ginsburg       David Maier
        Dept. of CS            Dept. of CS
        USC                    Oregon Graduate Center
        Los Angeles, CA 90007  19600 N. W. Walker Rd.
                               Beaverton, OR 97006

------------------------------

End of AIList Digest
********************

∂19-Sep-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #120    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Sep 84  10:45:30 PDT
Date: Wed 19 Sep 1984 09:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #120
To: AIList@SRI-AI


AIList Digest           Wednesday, 19 Sep 1984    Volume 2 : Issue 120

Today's Topics:
  AI Tools - Micro Production Systems,
  Professional Societies - AI SIG in San Diego,
  Books - Publisher Info for The Second Self,
  Scientific Method - Swans & Induction,
  AI and Society - CPSR,
  Robotics - Kitchen Robots,
  Pattern Recognition - Maximum Window Sum,
  Course - Decision Systems,
  Games - Computer Chess Championship
----------------------------------------------------------------------

Date: 18 September 1984 1053-EDT
From: Peter Pirolli at CMU-CS-A
Subject: micro production systems

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

A friend of mine is looking for a production system language (however simple)
that runs on an Apple (preferably) or any other micro.  He basically wants to
use the system to give some hands-on experience to fellow faculty members
at a small university where main-frame resources are too scarce to run a
full-blown production system.  Any pointers to micro-based systems would be
greatly appreciated.

Send mail to pirolli@cmpsya or pirolli@cmua.

------------------------------

Date: 17 Sep 84 07:16 PDT
From: Tom Perrine <tom@LOGICON.ARPA>
Subject: AI SIG in San Diego

I have an off-net friend who is interested in starting (or finding) a
Special Interest Group for AI in San Diego.  It would appear that if
ACM or IEEE knows about such a group, "they ain't talking." Is there
anyone else in S.D.  who would be interested in such a group?  Please
reply to me, not the Digest, of course.

Please include name, address and a daytime phone.

Thanks,
Tom Perrine
Logicon - OSD
San Diego, CA

------------------------------

Date: Mon 17 Sep 84 12:41:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Publisher Info for The Second Self

I neglected to provide the details...

The publisher is Simon & Schuster, ISBN is 0-671-46848-0, and
LC number is QA76.T85 1984 (or something like that).  The book is available
in quite a few bookstores, including the big chains, so try there first.

                                                        stan shebs

------------------------------

Date: 17 Sep 1984 08:47:02-EDT
From: sde@Mitre-Bedford
Subject: Swans:

At least during the latter part of March, 1980, the statement,
"all swans are white," was false; those familiar with Heinlein's
"fair witness" concept will recognize the phrasing; I say it
having witnessed black or near-black swans in Perth during the
aforementioned time.
Granting that the facts have little to do with the principle of
the argument, I thought folks might nonetheless be amused.
   David   sde@mitre-bedford

------------------------------

Date: 12 Sep 84 9:11:34-PDT (Wed)
From: hplabs!tektronix!bennety @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: tektroni.3588

Toby Robison's comment on Mark Chilenska's discussion on inductive
proof was quite apt -- however, we should note that induction is
limited to statements on a countably infinite set.  That is, induction
can only work with integers.

-bsy
 tektronix!bennety

------------------------------

Date: Mon, 17 Sep 84 11:01:17 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Uhrig's Stream of Consciousness in V2 #112


With regard to Werner's concern about unethical or immoral applications of
AI: Computer Professionals for Social Responsibility (CPSR) is very concerned
with this issue as am I.

Please give me feedback on this.  Perhaps the surest death-knell for the
outrageous-dangerous stuff ("Intelligent Computers" that would make life-or-
death decisions for the human race) is to require that they pass rigorous
tests.  If it is required that they actually work the way they are supposed to
many of the systems will die a natural (and deserved) death.  A comprehensive
and rigorous top-down (parallel with the top-down design) testing program may
be the answer.

  --Charlie

------------------------------

Date: 17 Sep 84 09:20:20 PDT (Monday)
From: Hoffman.es@XEROX.ARPA
Subject: AI in the kitchen, continued

The article in V2,#118, "Robot cooks if it finds the beef", reminded me
of the following:

"....
John McCarthy, one of the founders of the field of artificial
intelligence, is fond of talking of the day when we'll have 'kitchen
robots' to do chores for us, such as fixing a lovely shrimp creole.
Such a robot would, in his view, be exploitable like a slave because it
would not be conscious in the slightest.  To me, this is
incomprehensible.  Anything that could get along in the unpredictable
kitchen world would be as worthy of being considered conscious as would
a robot that could survive for a week in the Rockies.  To me, both
worlds are incredibly subtle and potentially surprise-filled.  Yet I
suspect that McCarthy thinks of a kitchen as ... some sort of simple and
'closed' world, in contrast to 'open-ended' worlds, such as the Rockies.
This is just another example, in my opinion, of vastly under-estimating
the complexity of a world we take for granted, and thus under-estimating
the complexity of the beings that could get along in such a world.
Ultimately, the only way to be convinced of these kinds of things is to
try to write a computer program to get along in a kitchen...."

Excerpted from a letter by DOUG HOFSTADTER in 'Visible Language',
V17,#4, Autumn 1983.  (In 1983, that periodical carried, in successive
issues, an extensive piece by Knuth on his Meta-Font, a lengthy review
by Hofstadter, and letters from both of them and from others.)

--Rodney Hoffman

------------------------------

Date: 14 Sep 1984 16:36-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Maximum window sum, in AIList V2 #117

Ken,

    Bentley's problem 7 asks for the complexity of the maximum
subarray sum problem.  I would advise you to call your algorithm a
solution to the maximum subarray sum problem, rather than a solution to
problem 7.  You have given an upper bound for the complexity, but
barring an equal lower bound problem 7 is still unsolved.  I know of
no lower bound larger than the size of the input.

    In case you're interested, here's another maximum subarray sum
algorithm with the same time complexity, using less working storage.
See the comments for a description of its working.  Enjoy.

Dan


[The following is simpler, more efficient, and uses less auxiliary
storage than the version I gave (although it does require buffering
the full input array).  I can't think of any improvement.  -- KIL]

/*
**  maxsbasum
**
**    Compute the maximum subarray sum in an array. In case all
**    array elements are negative, the maximum sum is 0.0
**    for an empty subarray.
**
**  COMMENTS
**
**    Every subarray of an array is a full-height subarray of a
**    full-width subarray of the array.
**
**    This routine examines each of the O(NROWS↑2) full-width
**    subarrays of the array.  A vector containing the sum of each
**    column in the full-width subarray is maintained.  The maximum
**    full-height subarray sum of the full-width subarray corresponds
**    to the maximum subvector sum of the vector of column sums,
**    found in O(NCOLS) time using Kadane's algorithm.
**
**    Running time is O(NROWS↑2 NCOLS).  Working storage for this
**    program is dominated by the O(NCOLS) vector of column sums.
**
**  HISTORY
**
**    16-Sep-84  Laws at SRI-AI
**    Merged innermost two loops into one.
**
**    14-Sep-84  Hoey at NRL-AIC
**    Cobbled this version together.
**    Comm. ACM, September 1984; Jon Bentley
**    published maximum subvector code (Pascal).
**    Algorithm attributed to Jay Kadane, 1977.
**
**    11-Sep-84  Laws at SRI-AI
**    Wrote another program solving the same problem.  Parts of
**    his program, from AIList V2 #117, appear in this program.
*/


#include <stdio.h>

/* Sample problem. (Answer is 6.0.) */
#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
    { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

/* Macro to return the maximum of two expressions. */
#define MAX(exp1,exp2)  (((exp1) > (exp2)) ? (exp1) : (exp2))

main()
{
  float MaxSoFar;               /* Global maximum */
  float ColSum[NCOLS];          /* Column sums of full-width subarray */
  float MaxEndingHere;          /* For Kadane's algorithm */
  int lowrow,highrow;           /* Bounds of full-width subarray */
  int thiscol;                  /* Column index */

  /* Loop over bottom row of full-width subarray. */
  MaxSoFar = 0.0;
  for (lowrow = 0; lowrow < NROWS; lowrow++) {

    /* Initialize column sums. */
    for (thiscol = 0; thiscol < NCOLS; thiscol++)
      ColSum[thiscol] = 0.0;

    /* Loop over top row of full-width subarray. */
    for (highrow = lowrow; highrow < NROWS; highrow++) {

      /* Update column sum, find maximum subvector sum of ColSum. */
      MaxEndingHere = 0.0;
      for (thiscol = 0; thiscol < NCOLS; thiscol++) {
        ColSum[thiscol] += X[highrow][thiscol];
        MaxEndingHere = MAX(0.0, MaxEndingHere + ColSum[thiscol]);
        MaxSoFar = MAX(MaxSoFar, MaxEndingHere);
      }
    }
  }

  /* Print the solution. */
  printf("Maximum subarray sum:  %g\n",MaxSoFar);

}

------------------------------

Date: Tue 18 Sep 84 15:09:58-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Course - Decision Systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                         Course Announcement

            DECISION ANALYSIS AND ARTIFICIAL INTELLIGENCE


                   Engineering Economic Systems 234
                               3 units
                     Instructor: Samuel Holtzman

                 Monday and Wednesday 2:00 to 3:15 pm
                        Building 260, room 264

This course investigates the relationship between decision analysis
and artificial intelligence in building expert systems for decision
making in complex domains.  Major topic areas include fundamentals of
artificial intelligence (production systems, search, logic
programming) and design of intelligent decision systems based on
decision analysis (use of formal methods in decision making,
representation and solution of decision problems, reasoning under
uncertainty).  The course will also cover programming in Lisp for
students not familiar with the language.  Course requirements include
a substantial project based on the concepts developed in the course.

Prerequisites:  EES 231 (Decision Analysis) or equivalent
                and familiarity with computer programming.

For further information contact:

                Samuel Holtzman
                497-0486, Terman 301
                HOLTZMAN@SUMEX

------------------------------

Date: Mon Sep 17 17:15:08 1984
From: mclure@sri-prism
Subject: Games - Computer Chess Championship

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

    The ACM annual North American Computer Chess Championship is a
watering-hole for computer chess researchers, devotees, and ordinary
chess players interested in what new improvements have been made in
computer chess during the past year.

        Come see Ken Thompson and Belle seek out chess
        truth, chess justice, and the American Way!

        Watch David Levy wince as his chess program
        discovers innovations in chess theory unknown even
        to Grandmasters!

        Marvel at Bob Hyatt's Cray Blitz program as it
        slices through the opposition at many MIPS!

        See the tiny Spracklen program otherwise marketed as
        Prestige and Elite by Fidelity tally up points
        against the "big boys!"

        Gawk as ivory tower researchers such as Tony
        Marsland of University of Alberta try to turn
        obscure and obfuscating computer chess theory into
        tangible points against opposition!

        Watch in amazement as David Slate's NUCHESS program,
        a descendant of the famous Northwestern University
        Chess 4.5 program, tries to become the most
        "human-like" of chess programs!

        And strangest of all, see a chess tournament where the
        noise level is immaterial to the quality of play!

The following information is from AChen at Xerox...

        1) dates - 7-9 Oct, 1984
        2) where - Continental Parlors at San Francisco Hilton
        3) times - Sun 1300 and 1900, 7 Oct, 1984
                   Mon 1900, 8 Oct, 1984
                   Tue 1900, 9 Oct, 1984
        4) who   - Tournament director will be Mike Valvo
                   four round Swiss-style includes Cray BLITZ,
                   BELLE and NUCHESS.

        for more information:
                Professor M. Newborn
                School of Computer Science, McGill University
                805 Sherbrooke Street West, Montreal
                Quebec, Canada H3A 2K6

note: this info can be found in the July 1984 issue of ACM Communications,
page A21.

        Stuart

------------------------------

End of AIList Digest
********************

∂19-Sep-84  2307	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #121    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Sep 84  23:06:58 PDT
Date: Wed 19 Sep 1984 21:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #121
To: AIList@SRI-AI


AIList Digest           Thursday, 20 Sep 1984     Volume 2 : Issue 121

Today's Topics:
  Machine Translation - Aymara as Intermediate Language,
  Logic - Induction & Deduction,
  Linguistics - Pittsburghese,
  Expert Systems & Humor - Excuse Generation
----------------------------------------------------------------------

Date: 11 Sep 84 17:07:00-PDT (Tue)
From: pur-ee!uiucdcs!uokvax!emjej @ Ucb-Vax.arpa
Subject: Aymara as intermediate language?
Article-I.D.: uokvax.900011

Yesterday in a local paper a news item appeared (probably AP or UPI)
telling about a fellow in South America (Ecuador? Peru, perhaps?),
named Ivan Guzman de Rojas, who seems to be having respectable
success using a S. American Indian language, that of the Aymara
Indians, as an intermediate language for machine translation of
natural languages. The article seemed to indicate that Aymara is
something of a pre-Columbian Loglan, near as I could tell. Any
references to the literature concerning this would be greatly
appreciated. (Send mail, I'll summarize to the net after a seemly
interval.)

                                        James Jones

                                uucp: ...!ctvax!uokvax!emjej
                                or    ...!{ctvax,mtxinu}!ea!jejones

------------------------------

Date: Wed, 19 Sep 84 14:10:13 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: Nitpicking...

  "... induction is limited to statements on a countably infinite set."

Well, that depends how you define induction.  If you define it in
the right way, all you need is a well-ordered set.  Cardinality
doesn't enter into it.

Concerning the argument "All A are B, x is an A, therefore x is a B".
It is not true that the conclusion is true only if the two assumptions
are true.  It is not even true that the argument is valid only if the
assumptions are true.  What is true is that we are guaranteed that
the conclusion is true only if the assumptions are true.

Thanks for your indulgence.
                                                        -smL

------------------------------

Date: 17 September 1984 1419-EDT
From: Lee Brownston at CMU-CS-A
Subject: Pittsburghese figured out

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The way Pittsburghers talk is a sure source of amusement for newcomers
to this area.  Most attention is devoted to diction, especially to the
idioms.  Although the latter are no more nor less illogical than any
other idioms, they are easily identified and likely to be unfamiliar.
Over the past couple of years, I've been trying to figure out the system
of phonology.  I'm still working on the suprasegmentals, but I have
some preliminary results on vowels and consonants that may be of some
interest.

As far as I can tell, the only consonantal departure from Standard
American English is that the final 'g' is omitted from the present
progressive to the extent that the terminal sound is the alveolar nasal
rather than the palatal nasal continuant.  This pronunciation is of
course hardly unique to Pittsburgh.

The vowels are much more interesting.  The 'ow' sound is pronounced 'ah',
as in 'dahntahn'.  Confusion between, say, "down" and "don" is avoided
since the 'ah' sound has already vacated: it is pronounced 'aw', as in
'Bawb signed the cawntract'.  Similarly, 'aw' has gone to the greener
pastures of 'or', as in 'needs worshed'.  It appears that the chain ends
here.  As its discoverer, I shall call this phonological game of musical
chairs "the great Pittsburgh vowel movement."

------------------------------

Date: Mon 17 Sep 84 23:38:41-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: Humor - Excuse Generation


TOWARDS THE AUTOMATIC GENERATION OF EXCUSES
by David Throop

The Great and Pressing Need for Excuses.
  There is a huge need in industry for excuses.  A recent  marketing survey
shows that the biggest need in air transport is not for a service to get it
there overnight, but for one that takes the blame for it being three weeks
late.  Every time there is a dockworkers' strike anywhere in the world, titans
of commerce who are completely unaffected get on the phone.  They then explain
to all their customers that every order that is overdue is sitting on that
dock, and that they couldn't help it.  Then they grin.  Because they've got a
good excuse.  And even the smallest industrial project needs a raft of
excuses by the time it finishes.
  Computers have already helped with this need.  Many problems that used to be
blamed on the postal service, on the railroads and on telegraph operators are
now routinely blamed on computers.  "Your check is in the mail" has now been
supplemented by "Our computer has been down, and we'll send you your money as
soon as the repairman fixes it."  Whenever a bridge collapses, specialized
teams of system analysts are called in, in order to quickly blame the whole
mess on a computer.
  But computers can do more than this.  Computers have a part to play in the
generation of excuses: actually coming up with the lies and evasions that keep
our economy running.

The Structure of Excuses
  Excuses range greatly in size.  Many small excuses can be generated
without any AI or other advanced techniques.  And there will always be some
really big FUBARS that will need humans to come up with appropriate excuses.
But in between there is the somewhat stereotyped snafu that can be framed in
some structure and has different excuse elements as slots.  These are the
half-assed excuses, the most fruitful field for knowledge engineering.

Where It Came From
  It has been noted repeatedly in work on computer vision that a subject often
does not have all of the necessary information to justify an observation, but
that he makes it anyway and supplies some "excuse" to explain why some
features are missing.  The classic illustration of this problem is in
envisioning a chair: the subject may only be able to see three of the legs
but assumes a 4-legged chair.  Indeed, Dr. Minsky presented such a chair at the
AAAI in August.
  We interviewed the chair itself after the lecture and asked it why it came
with only three legs.  The resulting string of excuses was impressive, and
more robust than one might expect from a broken piece of furniture.
  These included:
      "I'm not registered with the local chairs' union, so they'd only let me
        up on stage if I took off one of my legs.
      "Accounting cut my travel allowance by 18%, so I had to leave my leg
        back in California.
      "This is just a demo chair that we put together for the conference.  We
        have a programming team on the West coast that will have implemented
        another leg by October.
      "My secretary talked to somebody on the program committee who assured
        her that I wouldn't have to bring my own legs, and that there would be
        plenty of legs here in Austin.  Then I got here and found they were
        overbooked.
      "I felt that three legs was adequate to demonstrate the soundness of the
        general leg concept, and actually implementing a fourth leg would have
        been superfluous."

  This underlined a central observation: making excuses is critical to
perception, and is central to intelligence.  I mean, think about it.  Sounding
intelligent involves making gross generalizations & venting primitive
prejudices and then making plausible excuses for them when they collide with
reality.  Any imaginable robot that understands the consequences of its
actions will want to weasel out of them.

     The 3-legged chair problem yielded a high number of palatable excuses.
This toy problem shows the feasibility of generating large numbers of
industrial-strength excuses.  This goal would free humans from having to
justify their actions, leaving them more time to spend on screwing things
up.  That, after all, seems to be what they are best at.

How It Works
  A user makes a request via SNIVEL (Stop-Nagging,-I'm-Verifying-an-Excuse
Language), a user-friendly system that nods, clucks sympathetically, encourages
the user to vent his hostility & frustration, and has a large supply of
sympathetic stock answers for lame excuses:
  "You poor dear, I know you were trying as hard as you could.
  "Well, you can't be blamed for trusting them.
  "I can certainly see how you couldn't get your regular work done after an
     emotional shock like that."

  The program then begins to formulate an excuse appropriate to the problem.
Many problems can be recognized trivially and have stock excuses.  These can
be stored in a hash table and supplied without any search at all:
  "The dog vomited on it, so I threw it out.
  "It's in the mail.
  "I thought you LIKED it when I did that.
  "Six debates would probably bore the public.
  "I have a headache tonight.
  "I trusted in the advice of my accountant/lawyer/broker/good-time mama."
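The hash-table scheme described above can be sketched in a few lines.  This
is my own illustration, not part of any actual system; the table contents
come from the text, but the function and variable names are invented:

```python
# Stock excuses for trivially recognizable problems, supplied by
# constant-time hash-table lookup -- no search at all.
STOCK_EXCUSES = {
    "late_payment": "It's in the mail.",
    "missing_homework": "The dog vomited on it, so I threw it out.",
    "bad_investment": "I trusted in the advice of my accountant.",
}

def excuse_for(problem: str) -> str:
    # Unknown problems fall through to a safe, all-purpose evasion.
    return STOCK_EXCUSES.get(problem, "Our computer has been down.")
```

Only when the lookup fails to cover the case would the more elaborate
dialog machinery described below need to be invoked.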

  If the problem is more complex, SNIVEL enters into a dialog with the user.
Even if he wants to take responsibility for his share of the problem, SNIVEL
solicits the user, getting him to blame other people and explain why it wasn't
REALLY his fault.  A report may be late getting to a client, for instance;
SNIVEL may ask what last-minute changes the client had requested, and what
kinds of problems the user had with the typing pool.  SNIVEL shares records
with the personnel file, so that it can quickly provide a list of co-workers'
absences that probably slowed the whole process down.  It has a parsing
algorithm that takes the original work order and comes up with hundreds of
different parses for each sentence, demonstrating that the original order was
ambiguous and caused a lot of wasted effort.
  One of the central discoveries of AI has been that problems that look easy
are often very hard.  Proving this rigorously is a powerful tool: it provides
the excuse that almost any interesting problem is too hard to solve.  So of
course we're late with the report.

Theoretical Issues
  Not all the work here has focused on immediate payoffs.  We
have studied several theoretical issues involved with excuses.  We've found
that all problems can be partitioned into:
   1) Already Solved Problems for which excuses are not needed.
   2) Unsolved Problems
   3) Somebody Else's Problem
  We concentrate on (2).  We've shown that this class is further divisible.
Of particular interest is the class of unsolved problems for which the set of
palatable excuses is infinite.  These problems never need to actually be
solved.  We can generate research proposals, programs, and funding requests
indefinitely without ever having to produce any results.  We just compute the
next excuse in the series and go on.

Remaining problems
  It is easiest to generate excuses when the person receiving the excuse is
either a complete moron or really couldn't care less about the whole project.
Fortunately, this is often the case and can be the default assumption.  But it
is often useful to model the receiver of the excuse.  We can then calculate
just how big a whopper he's likely to swallow.
  It is, of course, not necessary that the receiver believe the excuse, just
that he accepts it.  The system is not yet able to model why anyone
would accept the excuse "Honestly, we're just friends, there's nothing
between us at all."  But our research shows that most people accept this
excuse, and almost no one believes it.

  The system still has problems understanding different points of view.  For
instance, it cannot differentiate why

  "My neighbors were up drinking and fighting and doing drugs and screaming
all night, so I didn't get any sleep at all,"

 is a reasonable excuse for being late to work, but

  "I was up drinking and fighting and doing drugs and screaming all night, so
I didn't get any sleep at all," is not.

  Finally, the machine is handicapped by its looks.  No matter how
brilliantly it calculates a good excuse, it can't sweep back a head of
chestnut hair, fix a winning smile on its face, and say with heartfelt warmth,
"Oh, thank you SO much for understanding..."  And that is so much of the soul
of a truly good excuse.

------------------------------

End of AIList Digest
********************

∂21-Sep-84  0034	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #122    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Sep 84  00:34:30 PDT
Date: Thu 20 Sep 1984 23:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #122
To: AIList@SRI-AI


AIList Digest            Friday, 21 Sep 1984      Volume 2 : Issue 122

Today's Topics:
  AI Tools - Production Systems on Micros,
  Logic - Deduction & Induction,
  Project - Traffic Information System,
  Seminar - Common Sense Thinking,
  Seminar Series - Theories of Information & NCARAI Series
----------------------------------------------------------------------

Date: Thu 20 Sep 84 11:13:30-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Production Systems on Apple

The only thing I have seen for Apple is micro-PROLOG + APES (Augmented PROLOG
for Expert Systems), marketed in the U.S. by Programming Logic Systems, Inc.,
31 Crescent Drive, Milford, CT 06460 (203-877-7988).  I have no experience with
the system, but the brochures I have seen and the price make it attractive.
Micro-PROLOG runs on Apple II with a Z80 card and full l.c. keyboard, on the
IBM PC and PC Jr., and on various configurations of Osborne, Kaypro II,
HP 150, TRS 2000, Xerox 820, among others.  CP/M 80 systems require at least
48K RAM, while PC/MS DOS needs 128K.  APES reportedly runs on any system which
supports micro-PROLOG, but the order form lists only PC/MS DOS and CP/M 86
versions (for Apricot, Sirius and IBM PC compatible).  APES requires a minimum
memory configuration of 128K.  In today's inflated market, the license fees of
$295 each or $495 for both are not too outrageous.  Clark and McCabe's book is
included.

The only other systems I've heard about are Expert-Ease and M.1 for the IBM PC
and TI's Personal Consultant for their Professional Computer.  These go for
$2000, $12,500 and $3000 each.  The literature and reviews of Expert-Ease make
it look like a joke (a friendly interface to a table), but neither medium has
been able to give an example of the system's inductive capabilities.  Expert-
Ease appears to be able to form rules from examples, but the people writing the
brochures and reviews don't seem to be able to understand or convey this.  I
saw M.1 and the Personal Consultant demoed at AAAI.  Both are Emycin clones,
minus a lot of the frills (and thus, perhaps, minus the bugs).  The Personal
Consultant seemed more impressive.  It is supposedly written in IQLISP, but
does not appear to port to non-TI computers running IQLISP.  All of these
products seem way overpriced, as university research has made them fairly
simple engineering projects.  In the case of the Personal Consultant, none of
the academics who did the research seem connected with the project.  I imagine
that Teknowledge (M.1) has some of Emycin's designers on staff, and know that
Michie is involved with Expert-Ease.

Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

Date: 15 Sep 84 18:16:54-PDT (Sat)
From: decvax!mcnc!akgua!psuvax1!simon @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: psuvax1.1140

   ....induction (in mathematics) can deal only with integers.

(approximate quote). So what else do you expect a formal system to deal with?
The only reasonable answer would be "small finite sets (that are equivalent to
subsets of integers)."  Sure, there are non-denumerable sets that are interesting
- but only to sufficiently abstract mathematicians. I do not see useful computer
systems worrying about large cardinals, determinacy or the continuum.
janos simon

------------------------------

Date: 20 Sep 84 17:30-PDT
From: mclure @ Sri-Unix.arpa
Subject: deduction vs. induction

The recent claim in AILIST that

        'deduction proceeds from the general (axioms) to
         the specific (propositions), induction proceeds from
         the specific to the general.'

is not correct.

A lucid definition and comparison of both can be found in:

    LOGIC AND CONTEMPORARY RHETORIC by Kahane

        Stuart

------------------------------

Date: Wed, 19 Sep 84 23:01:24 BST
From: "Dr. A. Sloman" <XASV02%svpa@ucl-cs.arpa>
Subject: Project - Traffic Information System

                       [Edited by Laws@SRI-AI.]


     An Intelligent Collator and Condenser of Traffic Information

The Cognitive Studies Programme, at Sussex University, UK, now has an
AI/Natural Language project to build a traffic information system.
The project is concerned with a system which processes and integrates
reports from the police about traffic accidents. It must also make decisions
about which motorists are to be informed about these accidents, by means
of broadcasts over an (eventually) nationwide cellular radio network.
A significant part of the project will involve investigating to what
extent unrestricted natural language input can be handled, and how the obvious
problems of unexpected and ungrammatical input can be overcome. It will also
be necessary to encode rules about intelligent broadcasting strategies for
traffic information.  A dedicated workstation (probably SUN-2/120)
will be provided for the project, as well as access to network
facilities and other computing facilities at Sussex University (mostly
VAX-based).

For information about the project, and/or about the large and growing AI
group at Sussex University, please contact Chris Mellish, Arts Building E,
University of Sussex, BRIGHTON BN1 9QN, England. Phone (0273)606755 -
if Chris is not in ask for Alison Mudd.
(Contact via netmail is not convenient at present.)

Aaron Sloman

------------------------------

Date: Wed, 19 Sep 84 15:49:20 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Common Sense Thinking

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, September 25, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

   SPEAKER:        John McCarthy, Computer Science  Department,
                   Stanford University

   TITLE:          What is common sense thinking?

   ABSTRACT:       Common sense  thinking  includes  a  certain
                   collection  of knowledge and certain reason-
                   ing  ability.   Expert  knowledge  including
                   scientific knowledge fits into the framework
                   provided  by  common  sense.   Common  sense
                   knowledge  includes  facts  about the conse-
                   quences  of  actions  in  the  physical  and
                   psychological  worlds,  facts about the pro-
                   perties of space, time, causality and physi-
                   cal  and  social objects.  Common sense rea-
                   soning includes both logical  deductive  and
                   various  kinds  of  non-monotonic reasoning.
                   Much common sense knowledge is  not  readily
                   expressible  in  words, and much that can be
                   usually isn't.

                   The lecture will attempt  to  survey  common
                   sense  knowledge and common sense reasoning.
                   It will be oriented  toward  expressing  the
                   knowledge in languages of mathematical logic
                   and expressing the  reasoning  as  deduction
                   plus formal non-monotonic reasoning.

------------------------------

Date: Wed 19 Sep 84 19:55:18-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar Series - Theories of Information

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

    PROJECT ACTIVITIES FOR PROJECT F-1:  THEORIES OF INFORMATION

The notions of information and of informational content are central to much
of the work done at CSLI and are emerging as central notions in philosophy,
computer science, and other disciplines.  Thus we need mathematically
precise and philosophically cogent accounts of information and the forms
it takes. The F-1 project will hold a series of meetings on various CSLI
researchers' approaches to the notion of information.  The emphasis will be
on gaining a detailed understanding of the theories that are being developed
and discussing issues in ways that will be helpful in making further
progress.  Those interested should attend the meetings regularly to help
develop a working group with a shared body of knowledge.  For this reason,
we will not make it a practice to announce individual meetings, which will
occur approximately bi-weekly, Tuesdays at 3:15, in the Ventura Seminar
Room.  The first meeting will be on October 2, when Jon Barwise will speak
for a bit about the nature and prospects for a theory of information,
followed by Fernando Pereira and/or Stan Rosenschein who will talk about
the current state of situated automata theory.

                                                        ---John Perry

------------------------------

Date: 20 Sep 84 15:26:51 EDT
From: Dennis Perzanowski <dennisp@NRL-AIC.ARPA>
Subject: Seminar Series - Fall AI Seminar Schedule at NCARAI

                       U.S. Navy Center for Applied Research
                             in Artificial Intelligence
                       Naval Research Laboratory - Code 7510
                             Washington, DC   20375-5000


                                FALL SEMINAR SERIES


        Monday,
        24 Sept. 1984   Professor Hanan Samet
                        Computer Science Department
                        University of Maryland
                        College Park, MD
                                "Overview of Quadtree Research"

        Monday,
        15 Oct. 1984    Professor Stefan Feyock
                        Computer Science Department
                        College of William and Mary
                        Williamsburg, VA
                                "Syntax Programming"

        Monday,
        22 Oct. 1984    Professor Andrew P. Sage
                        Computer Science Department
                        George Mason University
                        Fairfax, VA
                                "Alternative Representations
                                 of Imprecise Knowledge"

         Monday,
         5 Nov. 1984    Professor Edwina Rissland
                        Department of Computer and Information Sciences
                        University of Massachusetts
                        Amherst, MA
                                "Example-based Argumentation and Explanation"

        Monday,
        19 Nov. 1984    Mr. Kurt Schmucker
                        National Security Agency
                        Office of Computer Science Research
                        Ft. Meade, MD
                                "Fuzzy Risk Analysis: Theory and Implication"



   The above schedule is a partial listing of seminars to be offered this
   year.  When future dates and speakers are confirmed, another mailing
   will be sent to you.

   Our meetings are usually held on the first and third Monday mornings
   of each month at 10:00 a.m. in the Conference Room of the Navy Center
   for Applied Research in Artificial Intelligence (Bldg. 256) located on
   Bolling Air Force Base, off I-295, in the South East quadrant of
   Washington, DC.  A map can be mailed for your convenience.  Please
   note that not all seminars are held on the first and third Mondays this
   fall due to conflicting holidays.

   Coffee will be available starting at 9:45 a.m. for a nominal fee.

   IF YOU ARE INTERESTED IN ATTENDING A SEMINAR, PLEASE CONTACT US BEFORE
   NOON ON THE FRIDAY PRIOR TO THE SEMINAR SO THAT A VISITOR'S PASS WILL
   BE AVAILABLE FOR YOU ON THE DAY OF THE SEMINAR.  NON-U.S. CITIZENS
   MUST CONTACT US AT LEAST TWO WEEKS PRIOR TO A SCHEDULED SEMINAR.
   If you would like to speak, be added to our mailing list, or would
   like more information, contact Dennis Perzanowski.  [...]

   ARPANET: DENNISP@NRL-AIC or (202) 767-2686.


------------------------------

End of AIList Digest
********************

∂23-Sep-84  1304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #123    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Sep 84  13:04:18 PDT
Date: Sun 23 Sep 1984 10:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #123
To: AIList@SRI-AI


AIList Digest            Sunday, 23 Sep 1984      Volume 2 : Issue 123

Today's Topics:
  AI Tools - OPS5,
  Expert Systems - Computer Program Usage Consultant,
  Literature - Introductory Books & IEEE Computer Articles,
  LISP - VMS LISPS,
  Logic - Induction and Deduction & Causality,
  Humor - Slimy Logic Seminar,
  Seminar - Analysis of Knowledge,
  Course & Conference - Stanford Logic Meeting
----------------------------------------------------------------------

Date: 21 Sep 84 13:24:47 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Info needed on OPS5


Any information on compilers/interpreters for the OPS5 language on VAXen
will be appreciated. I'm particularly interested in relatively short
reviews and/or introductions to the language; a tutorial would be nice.
If any of this stuff is available online I'd like to FTP it.
        Thanx in advance.
                Biesel@rutgers.arpa

------------------------------

Date: 17 Sep 84 15:28:05-PDT (Mon)
From: hplabs!tektronix!uw-beaver!ssc-vax!alcmist @ Ucb-Vax.arpa
Subject: Computer Program Usage Consultants?
Article-I.D.: ssc-vax.99

I am working on an expert system to advise users setting up
runs of a complex aerodynamics program.  The project is sort of like
SACON, only we're trying to do more.

Does anyone know of work in progress that I should know about?  I
am interested in any work being done on

        1. Helping users set up appropriate inputs for a
        sophisticated analytical or simulation program,
        2. Diagnosing problems with the output of such a program,
        or
        3. Interpreting large volumes of numerical output in
        a knowledgeable fashion.

I am looking for current work that people are willing to talk about.
Pointers to literature will be appreciated, even though our library
is doing a literature search.

Please reply by mail!  I will send a summary of responses to anybody
who wants one.

Fred Wamsley
Boeing Computer Services AI Center
UUCP:     {decvax,ihnp4,sdcsvax,tektronix}!uw-beaver!ssc-vax!alcmist
ARPA:     ssc-vax!alcmist@uw-beaver.ARPA

------------------------------

Date: 15 Sep 84 10:38:00-PDT (Sat)
From: pur-ee!uiucdcs!convex!graham @ Ucb-Vax.arpa
Subject: "introductory" book on AI??
Article-I.D.: convex.45200003

I would like to learn more about the AI field.  I am almost "illiterate" now.
I have a PhD in CS from Illinois and 26 years experience in system software
such as compilers, assemblers, link-editors, loaders, etc...  Can anyone cite
a good book  or books for the AI field which
        is comprehensive
        is tutorial, in the sense that it includes the motivation behind
                the avenues in AI that it describes, and
        includes a good bibliography to other works in the field?

[Previous AIList discussion on this subject seems to have found Winston's
new "Artificial Intelligence" and Elaine Rich's "Artificial Intelligence"
to be good textbooks.  The three-volume Handbook of AI is also excellent.
Older texts by Nils Nilsson and by Bertram Raphael ("The Thinking Computer")
still have much to offer.  Other recent books cover LISP, PROLOG, and AI
programming techniques, as well as expert systems and AI as a business.
-- KIL]

------------------------------

Date: Fri 21 Sep 84 10:08:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Knowledge Engineering Article

The September issue of IEEE Computer is devoted to AI systems, with
emphasis on the man-machine interface.  It's well worth reading.

Frederick Hayes-Roth's article seems to be an excellent introduction
to knowledge engineering.  (The title is "The Knowledge-Based Expert
System: A Tutorial," but it is not really an expert-systems overview.)
The article by Elaine Rich on natural-language interfaces is also
excellent.  There are other articles on smart databases, tutoring
systems, job-shop control, and decision support systems.

There is also an article on a declarative parameter-specification
system for Schlumberger's Crystal system.  I found the article hard
to follow, and I have strong doubts about the desirability of building
a domain-independent parameter parser, then using procedural attachment
in the parameter declarations to hack in runtime dependencies and
domain-specific intelligent behavior.  Even if this is to be done,
the base program should have the option of requesting parameters only
as (and if) they are needed, and should be able to create or alter the
declarative structures dynamically at the time the parameters are
requested.  Given such a system, the declarative structures are simply
a convenient way of passing control options to the user-query
subroutine.  Most of the procedural knowledge belongs in the procedural
code, not in declarative structures in a separate knowledge base.

                                        -- Ken Laws

------------------------------

Date: Sat, 22 Sep 84 14:48:59 EDT
From: Gregory Parkinson <Parkinson@YALE.ARPA>
Subject: VMS LISPS

We run Yale's T on VMS and like it a lot.  According to our benchmarks
it runs (on the average) a little faster than DEC's Common Lisp.  The
T compiler gets rid of tail recursion which speeds things up a bit, and
is about 40 times faster when dealing with labels.  Subjectively, working
with CL after working with T feels like driving a 76 Caddie Eldorado (power
windows, seats, brakes, steering, etc.) after getting used to a Honda CRX.
They both get you where you're going, but there's something about the
Honda that makes you feel like you're really driving......
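What eliminating tail calls buys can be sketched as follows.  Python itself
does not perform this optimization (and T is a Lisp dialect), so this is
only an illustration of the transformation such a compiler applies; the
function names are invented:

```python
def sum_to_rec(n, acc=0):
    # Tail-recursive form: the recursive call is the very last thing
    # done, so its stack frame need never be kept around.
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)

def sum_to_loop(n, acc=0):
    # What tail-call elimination compiles the above into: a loop that
    # reuses one frame, updating the "arguments" in place.
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

Both compute the same sum, but the loop form runs in constant stack space,
which is where the speedup (and the absence of stack overflows on deep
recursions) comes from.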

                                          Greg Parkinson
                                          Cognitive Systems, inc.

------------------------------

Date: 21 Sep 84 3:42:03-EDT (Fri)
From: hplabs!hao!seismo!mcvax!vu44!tjalk!dick @ Ucb-Vax.arpa
Subject: Proof by induction, fun & entertainment
Article-I.D.: tjalk.338

Claim: All elements of an array A[1..n] are equal to its first element.
Proof by induction:
        Starting case: n = 1.
                Proof:
                        Obvious, since A[1] = A[1].
        Induction step:
                If the Claim is true for n = N, it is true for n = N + 1.
                Proof:
                        All elements of A[1..N] are equal (premise), and since
                        A[2..N+1] is an array of length N, all its elements
                        are equal also.  A[N] is in both (sub-)arrays, so
                                A[1] = A[N] and
                                A[N] = A[N+1]   ->
                                        A[1] = A[N+1]
                        which makes all of A[1..N+1] equal.
                End of proof of induction step
        The starting case and the induction step together prove the Claim.
End of proof by induction

                Courtesy of             Dick Grune
                                        Vrije Universiteit
                                        Amsterdam
                                        the Netherlands



[ *** Spoiler ***     The flaw, of course, is in the statement that
"A[N] is in both (sub-)arrays".  (I point this out to avoid a flood of
mail supplying the answer.)  -- KIL]
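The failure can be checked mechanically: the index sets of A[1..N] and
A[2..N+1] intersect for every N except N = 1, which is the only case the
base step actually establishes.  A small sketch (mine, not the poster's):

```python
def subarrays_overlap(N):
    # The induction step relies on A[1..N] and A[2..N+1] sharing an
    # element.  Check whether the two index sets actually intersect.
    return bool(set(range(1, N + 1)) & set(range(2, N + 2)))

# The step is valid for N >= 2 but fails at N = 1 -- so the chain of
# implications never gets off the ground.
assert not subarrays_overlap(1)
assert all(subarrays_overlap(N) for N in range(2, 10))
```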

------------------------------

Date: Fri, 21 Sep 84 08:36 CDT
From: Boebert@HI-MULTICS.ARPA
Subject: More on induction and deduction

More on induction and deduction, along with much other interesting and
entertaining discussion, can be found in

Proofs and Refutations
by Imre Lakatos
Cambridge

------------------------------

Date: Fri 21 Sep 84 10:32:39-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

In reply to the claim that my statement

        'deduction proceeds from the general (axioms) to
         the specific (propositions), induction proceeds from
         the specific to the general.'

is not correct (according to Kahane, LOGIC AND CONTEMPORARY RHETORIC),
see Aristotle, BASIC WORKS OF ARISTOTLE, ed. by R. McKeon, Random
House, 1941.

------------------------------

Date: 18 Sep 84 5:54:04-PDT (Tue)
From: hplabs!hao!seismo!umcp-cs!chris @ Ucb-Vax.arpa
Subject: Re: Causality
Article-I.D.: umcp-cs.16

(Apply :-) to entire reply)

>  What's wrong with event A affecting event B in event A's past?  You
>can't go back and shoot your own mother before you were born because you
>exist, and obviously you failed.  If we assume the universe is
>consistent [and not random chaos], then we must assume inconsistencies
>(such as shooting your own mother) will not arise.  It does not,
>however, place time constrictions on cause and effect.

Who says you can't even do that?  Perhaps your existence is actually
just a probability function.  If P(existence) becomes small enough
you'll just disappear.  Maybe that explains all those mysterious
disappearances (``He just walked around the horses a moment ago...'')

In-Real-Life: Chris Torek, Univ of MD Comp Sci (301) 454-7690
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!chris
CSNet:  chris@umcp-cs           ARPA:   chris@maryland

------------------------------

Date: 17 Sep 84 18:21:16-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: wdl1.424

     Having spent some years working on automatic theorem proving and
program verification, I am occasionally distressed to see the ways in which
the AI community uses (and abuses) formal logic.  Always bear in mind that
for a deductive system to generate only true statements, the axioms of the
system must not imply a contradiction; in other words, it must be impossible
to deduce TRUE = FALSE.  In a system with a contradiction, any statement,
however meaningless, can be generated by deductive means.
     It is difficult to ensure the soundness of one's axioms.  See Boyer
and Moore's ``A Computational Logic'' for a description of a logic for which
soundness can be demonstrated and a program which generates inductive proofs
based on that logic.  The Boyer and Moore approach works only for mathematical
objects constructed in a specific and rigorous manner.  It is not applicable
to ``real world reasoning.''
     There are schemes such as nonmonotonic reasoning which attempt to deal
with contradictions.  These are not logical systems but heuristic systems.
Some risk of incorrect results is accepted in exchange for the ability to
``reason'' with non-rigorous data.  A clear distinction should be made between
mathematical deduction in rigorous spaces and heuristic problem solving by
semi-logical means.
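Nagle's point -- that an axiom set implying a contradiction lets you derive
any statement whatever -- can be seen with a brute-force propositional
entailment checker.  (This sketch is an editorial illustration, not part of
the original message; all names in it are invented.)

```python
from itertools import product

def entails(kb, goal, atoms):
    """KB entails goal iff no truth assignment satisfies every clause
    of KB while falsifying goal.  Clauses are sets of literals; a
    literal is a (atom_name, polarity) pair."""
    def sat(clause, model):
        return any(model[a] == pol for a, pol in clause)
    for values in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(sat(c, model) for c in kb) and not sat(goal, model):
            return False          # found a counter-model
    return True

# A consistent KB {P} does not entail an unrelated atom Q ...
consistent = [{("P", True)}]
# ... but an inconsistent KB {P, not-P} entails everything, Q included.
inconsistent = [{("P", True)}, {("P", False)}]

print(entails(consistent, {("Q", True)}, ["P", "Q"]))    # False
print(entails(inconsistent, {("Q", True)}, ["P", "Q"]))  # True
```

Because no model satisfies the inconsistent KB, the search for a
counter-model fails vacuously -- which is exactly why soundness of the
axioms matters.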

                                John Nagle

------------------------------

Date: 20 Sep 1984  10:44 EDT (Thu)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: Humor & Seminar - Slimy Logic

     [Forwarded from the MIT bboard by SASW@MIT-MC.]


       The Computer Aided Conceptual Art Laboratory
                            and
           Laboratory for Graduate Student Lunch
                          presents

                         SLIMY LOGIC
                              or
       INDENUMERABLY MANY TRUTH-VALUED LOGIC WITHOUT HAIR

                         by Lofty Zofty


The indenumerably many-valued logics which result from the first stage
of slime-ification are so to speak "non-standard" logics; but slimy logic,
the result of the second stage of slime-ification, is a very radical
departure indeed from classical logics, and thereby sidesteps many
fruitless preoccupations of logicians such as completeness, consistency,
axiomatization, and proof.  In this talk I attempt to counter Slimy Logic's
low and ever-declining popularity by presenting a "qualitative" view
of slimy logic in which such definitions as
        very true = true^2
and
        not very pretty false = false^(-3/2)

are replaced by the qualitative (i.e. so even people who don't carry
around two calculators can understand them) definitions:

        very true = true
and
        not very pretty false = ugly false

I will then use this "qualitative" slimy logic to very nearly prove
very much that Jon Doyle is probably not very right about nearly
extremely many things.

HOSTS: Robert Granville and Isaac Kohane
Refreshments will be served
Moved to the Third Floor Theory Group Playroom

------------------------------

Date: 20 September 1984 13:30-EDT
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Seminar - Analysis of Knowledge

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

DATE:     Wednesday, September 26, 1984
TIME:     Refreshments, 3:45pm
          Lecture, 4:00pm
PLACE:    NE43-453
TITLE:    ``A MODEL-THEORETIC ANALYSIS OF KNOWLEDGE''
SPEAKER:  Dr. Joseph Y. Halpern, IBM, San Jose

Understanding knowledge is a fundamental issue in many disciplines.  In
computer science, knowledge arises not only in the obvious contexts (such as
knowledge-based systems), but also in distributed systems (where the goal is to
have each processor know something, as in Byzantine agreement).  A general
semantic model of knowledge is introduced, to allow reasoning about statements
such as "He knows that I know whether or not she knows whether or not it is
raining."  This approach more naturally models a state of knowledge than
previous proposals (including Kripke structures).  Using this notion of model,
a model theory for knowledge is developed.  This theory enables one to
interpret such notions as a "finite amount of information" and "common
knowledge" in different contexts.  This is joint work with Ron Fagin and Moshe
Vardi.

HOST:    Professor Silvio Micali

------------------------------

Date: Mon 17 Sep 84 09:01:21-PDT
From: Jon Barwise <BARWISE@SU-CSLI.ARPA>
Subject: Course & Conference - Stanford Logic Meeting

           Logic, Language and Computation Meeting

The Association for Symbolic Logic (ASL) and the Center for the  Study
of Language  and Information  (CSLI) are  planning a  two-week  summer
school and  meeting, July  8-20, 1985,  at Stanford  University.   The
first week (July  8-13) will  consist of  a CSLI  Summer School,  with
courses on various topics, including PROLOG, LISP, Complexity  Theory,
Denotational Semantics,  Generalized Quantifiers,  Intensional  Logic,
and Situation Semantics.  The second week (July 15-20) will be an  ASL
meeting  with  invited  lectures  (in  Logic,  Natural  Language,  and
Computation), symposia (on "Logic in Artificial Intelligence",  "Types
in the  Study  of  Computer  and  Natural  Languages",  and  "Possible
Worlds"), and  sessions  for  contributed  papers.   Those  interested
should contact Ingrid Deiwiks, CSLI, Ventura Hall, Stanford, CA  94305
(ph 415-497-3084) before November 1, with an indication as to  whether
they would like to make a reservation for a single or shared room  and
board in  a residence  hall, and  for  what period  of time.   A  more
detailed program will be available in November.  The program committee
consists of Jon  Barwise, Solomon Feferman,  David Israel and  William
Marsh.

------------------------------

End of AIList Digest
********************

∂23-Sep-84  2339	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #124    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Sep 84  23:38:49 PDT
Date: Sun 23 Sep 1984 21:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #124
To: AIList@SRI-AI


AIList Digest            Monday, 24 Sep 1984      Volume 2 : Issue 124

Today's Topics:
  Algorithms - Demonstration Idea Wanted,
  Machine Translation - SIGART Special Issue,
  Natural Language - A Generalized Phrase Structured Grammar in Prolog,
  Expert Systems & Logic Programming - Kastner's Preference Rules in Prolog
----------------------------------------------------------------------

Date: 23 Sep 84 19:30:30 PDT (Sun)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Demonstration Idea wanted

A non-network friend of mine needs to demonstrate to a class the importance
of detailed specifications.  He has been trying to find a task which is easy
to do but hard to describe, so that half of the class can write descriptions
which the other half will follow literally and thereby fail to accomplish
the described task.  Anyone have any ideas other than tying shoelaces or
cooking beef wellington?  (Many people don't wear laced shoes and the
facilities available aren't up to cooking :-)).  Thanks!

-Mike
<mab@aids-unix>

------------------------------

Date: Thu, 20 Sep 84 20:41 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: SIGART Special Section on Machine Translation


                 ACM SIGART SPECIAL SECTION
         ON MACHINE TRANSLATION AND RELATED TOPICS

     A special section on MT and related work is planned for
an early 1985 issue of the SIGART Newsletter.

     The purpose of the section is:

     1.  To update the knowledge of the new paradigms in  MT
         in the AI community

     2.  To help MT workers to learn about  developments  in
         AI that can be useful for them in their projects

     3.  To  provide   the   MT   community   with   updated
         information  about  current,  recent and nascent MT
         projects

     4.  To  help  identify  major  topics,   results   and,
         especially, directions for future research.

Contributions are solicited from MT workers, as well as  all
workers   in  AI,  theoretical,  computational  and  applied
linguistics and other related fields  who  feel  that  their
work  has  a bearing on MT (machine-aided human translation;
automatic dictionary  management;   parsing  and  generating
natural  language;  knowledge representation for specialized
domains;  discourse analysis;  sublanguages  and  subworlds,
etc., etc.)

     A detailed questionnaire to help  you  in  preparing  a
response is available from the guest editor,

Sergei Nirenburg
Department of Computer Science
Colgate University
Hamilton NY 13346 USA
(315) 824-1000 ext. 586
nirenburg@umass

     If  you  know  of  people  interested   in   MT-related
activities  who are not on a net, please let them know about
this call.

     The deadline for submissions is DECEMBER 1, 1984.
     Electronic submissions are welcome

------------------------------

Date: Thursday, 13-Sep-84 18:49:25-BST
From: O'Keefe HPS (on ERCC DEC-10)
Subject: Availability of a GPSG system in Prolog

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

This message is composed of extracts from the ProGram manual.

ProGram is a suite of Prolog programs that are intended to permit
the design, evaluation, and debugging of computer realizations of
phrase structure grammars for large fragments of natural languages.
The grammar representation language employed is that known as GPSG
(Generalized Phrase Structure Grammar).  A GPSG grammar, as far as
ProGram is concerned, has up to nine components as follows:

        1. Specification of feature syntax.
        2. Immediate dominance rules (ID rules).
        3. Metarules which operate on the ID rules.
        4. Linear precedence rules (LP rules).
        5. Feature coefficient default values.
        6. Feature co-occurrence restrictions.
        7. Feature aliasing data.
        8. Root admissibility conditions.
        9. A lexicon.

All the major conventions described in the GPSG literature are
implemented, including the Head Feature Convention, the Foot
Feature Principle (and hence slash categories &c), the Control
Agreement Principle, the Conjunct Realisation Principle, lexical
subcategorisation  and rule instantiation incorporating the notion
of privilege.

All the major parts of the grammar interpreter code are written
in standard Prolog (Clocksin&Mellish).  Installation of the
system should be fairly simple on any machine of moderate size
which supports Prolog.

                             AVAILABILITY

1.  The manual is "University of Sussex Cognitive Science Research
    Paper 35" (CSRP 035) and can be ordered from Judith Dennison,
    Cognitive Studies Programme, Arts E, University of Sussex,
    Falmer, Brighton BN1 9QN, for 7.50 pounds including postage.
2.  ProGram is part of the standard Sussex POPLOG system and is
    included, without extra charge, in all academic issues and
    updates of the POPLOG system.  POPLOG is available to UK
    academic users for the sum of 500 pounds (special arrangements
    apply to holders of SERC AI grants who have a VAX running UNIX).
    Existing UK academic POPLOG users can obtain a free update of
    the POPLOG system which will include ProGram.  POPLOG runs on
    VAXes under VMS and UNIX, and on Bleasdale BDC 680as under UNIX.
    [RAOK: The Bleasdale is a 68000, POPLOG is on SUNs too by now.]
    Non-educational customers (UK & overseas) who want ProGram with
    POPLOG should order it through System Designers Ltd, Systems
    House, 1 Pembroke Broadway, Camberley, Surrey GU15 3XH.  This
    company makes POPLOG available to educational institutions in
    the USA for 995 dollars.
3.  Academic users of other Prolog systems can obtain a magnetic tape
    in UNIX "tar" format of the Prolog code of the ProGram system
    free, together with a copy of "The ProGram Manual", provided they
    pay for the tape, postage, packaging, and handling costs (35 pounds).
    Copies can be ordered from Alison Mudd, Cognitive Studies
    Programme, Arts E, University of Sussex, Falmer, Brighton BN1 9QN.
    A cheque for 35 pounds made payable to "The University of Sussex"
    should be enclosed with the order.



I have no connection with POPLOG, ProGram, or (save a recent visit
when I picked up the ProGram manual and saw PopLog running on its
home ground) with the University of Sussex.

Just to make sure you realise what ProGram is and isn't, it IS
meant to be a convenient toolkit for *developing* a GPSG grammar,
it is NOT meant to be the world's most efficient parser.  The manual
warns you that "in general, automatic exhaustive parsing with more
than a few rules tends to be slow".  You shouldn't need to know
any Prolog in order to use ProGram.

------------------------------

Date: Friday, 14-Sep-84 21:20:02-BST
From: O'Keefe HPS (on ERCC DEC-10)
Subject: Interpreting Kastner's Preference Rules in Prolog

[Forwarded from the Prolog Digest by Laws@SRI-AI.  This is a declarative
specification of an expert-system interpreter. -- KIL]


I've always been quite impressed by the "EXPERT" stuff being
done at Rutgers, and when I read Kastner's thesis

        Kastner, J.K.
        @i"Strategies for Expert Consultation in Therapy Planning."
        Technical Report CMB-TR-135, Department of Computer Science,
        Rutgers University, October 1983.  (PhD thesis)

I decided to write an interpreter for his rules in Prolog as an
exercise.  The first version just came up with the answer, that's the
stuff that's commented out below.  The second version left behind
information for "explanations":

    chosen(Answer, Reason, WouldHaveBeenPreferred)

Answer was the answer, Reason was the text the rule writer
gave to explain his default ordering of the treatments, and
WouldHaveBeenPreferred are the treatments we'd have preferred
in this ordering if they hadn't been contraindicated

    despite(Answer, Contraindications)

means that Answer was contraindicated by each of the problems
listed, but it was still picked because the preferred choices
had worse problems.

    rejected(Treatment, Contraindications)

means that Treatment was rejected because it had the problems
listed.  Every treatment will be rejected or chosen.  Note: in
these two facts the Contraindications are those which were
checked and found to be applicable, less severe ones may not
have been checked.  (This is a feature, the whole point of the
code in fact.)

You'll have to read Kastner's thesis to see how these rules are used,
but if you're interested in Expert Systems you'll want to read it.

Why have I sent this to the [Prolog] Digest?  Two reasons.  (1) someone
may have a use for it, and if I send it to the library it'll sink without
trace.  (2) I'm quite pleased with the "no-explanations" version, but
the "explanations" version is a bit of a mess, and if anyone can find
a cleaner way of doing it I'd be very pleased to see it.  I guess I
still don't know how best to do data base hacking.

A point which may be interesting: I originally had worst/6 binding its
second argument to 'none' where there were no new contraindications.
The mess which resulted (though it worked) reminded me of a lesson I
thought I'd learned before: it is dangerous to have an answer saying
there are no answers, because that looks like an answer.  All the
problems I had with this code came from thinking procedurally.
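For readers who don't speak Prolog, the core selection idea -- pick the
most preferred treatment whose first applicable contraindication sits as
far down its severity-ordered column as possible -- can be sketched in
Python.  This is a rough, eager paraphrase added for illustration (the
interpreter checks conditions lazily, which is the whole point of
O'Keefe's code), and the condition names are only loosely modelled on the
antiviral example:

```python
def prefer(columns, holds):
    # columns: treatment -> list of contraindication tests, most
    # severe first.  holds(test): does it apply to this patient?
    # Score a treatment by the position of its first applicable test;
    # a later (milder) first hit, or no hit at all, is better.
    def first_hit(tests):
        for depth, test in enumerate(tests):
            if holds(test):
                return depth
        return float("inf")          # nothing applies: unbeatable
    # max() keeps the earliest key on ties, preserving the rule
    # writer's default preference order among equally bad choices.
    return max(columns, key=lambda t: first_hit(columns[t]))

patient = {"preg", "rtft"}           # contraindications that apply
columns = {
    "ftft":  ["fail", "rtft", "at3", "at1"],
    "fvira": ["fail", "rvira", "av3", "av1"],
    "fidu":  ["preg", "ridu", "ai3", "ai1"],
}
print(prefer(columns, patient.__contains__))   # fvira
```

Here ftft is knocked out by the rtft resistance check and fidu by
pregnancy (its most severe check), so the middle treatment wins even
though it was not first in the default ordering.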

:-  op(900, fx, 'Is ').
:-  op(899, xf, ' true').
:-  compile([
        'util:ask.pl',          % for yesno/1
        'util:projec.pl',       % for project/3
        'prefer.pl'             % which follows
    ]).

%   File   : PREFER.PL
%   Author : R.A.O'Keefe
%   Updated: 14 September 1984
%   Purpose: Interpret Kastner's "preference rules" in Prolog

:- public
        go/0,
        og/0.

:- mode
        prefer(-, +, +, +),
        pass(+, +, +, -),
        pass(+, +, +, +, +, +, -),
        worst(+, -, +, +, -, -),
        chose(+, +, +),
        forget(+, +),
        compare_lengths(+, +, -),
        evaluate(+).


prefer(Treatment, Rationale, Contraindications, Columns) :-
        pass(Columns, [], Contraindications, Treatment),
        append(Pref1, [Treatment=_|_], Columns), !,
        project(Pref1, 1, Preferred),
        assert(chosen(Treatment, Rationale, Preferred)).


pass([Tr=Tests|U], Cu, Vu, T) :-
        worst(Tests, Rest, Cu, Vu, Cb, Vb), !,
        pass(U, [Tr=Rest], Cu, Vu, Cb, Vb, T).
pass([T=_|U], C, _, T) :-
        chose(T, U, C).


pass([], [T=_], _, _, C, _, T) :- !,
        chose(T, [], C).
pass([], B, _, _, Cb, Vb, T) :-
        reverse(B, R),
        pass(R, Cb, Vb, T).
pass([Tr=Tests|U], B, Cu, Vu, Cb, Vb, T) :-
        worst(Tests, Rest, Cu, Vu, Ct, Vt),
        compare_lengths(Vt, Vb, R),
        (   R = (<), C1 = Ct, V1 = Vt, B1 = [Tr=Rest], forget(B, Cb)
        ;   R = (=), C1 = Cb, V1 = Vb, B1 = [Tr=Rest|B]
        ;   R = (>), C1 = Cb, V1 = Vb, B1 = B, assert(rejected(Tr,Ct))
        ),  !,          % moved down from worst/6 for "efficiency"
        pass(U, B1, Cu, Vu, C1, V1, T).
pass([T=_|_], B, _, _, C, _, T) :-
        chose(T, B, C).


worst([Test|Tests], Tests, C, [X|V], [X|C], V) :-
        evaluate(Test), !.
worst([_|Tests], Rest, Cu, [_|Vu], Ct, Vt) :-
        worst(Tests, Rest, Cu, Vu, Ct, Vt).


evaluate(fail) :- !, fail.
evaluate(Query) :-
        known(Query, Value), !,
        Value = yes.
evaluate(Query) :-
        yesno('Is ' Query ' true'),
        !,
        assert(known(Query, yes)).
evaluate(Query) :-
        assert(known(Query, no)),
        fail.


chose(Treatment, Rejected, Contraindications) :-
        assert(despite(Treatment, Contraindications)),
        forget(Rejected, Contraindications).


forget([], _).
forget([Treatment=_|Rejected], Contraindications) :-
        assert(rejected(Treatment, Contraindications)),
        forget(Rejected, Contraindications).


compare_lengths([], [], =).
compare_lengths([],  _, <).
compare_lengths( _, [], >).
compare_lengths([_|List1], [_|List2], R) :-
        compare_lengths(List1, List2, R).


/*----------------------------
%  Version that doesn't store explanation information:

prefer(Treatment, Rationale, Contraindications, Columns) :-
        pass(Columns, 0, [], Treatment).


pass([], _, [T=_], T) :- !.
pass([], _, B, T) :-
        reverse(B, R),
        pass(R, 0, [], T).
pass([Tr=Col|U], I, B, T) :-
        worst(Col, 1, W, Reduced),
        !,
        (   W > I, pass(U, W, [Tr=Reduced], T)
        ;   W < I, pass(U, I, B, T)
        ;   W = I, pass(U, I, [Tr=Reduced|B], T)
        ).
pass([T=_|_], _, _, T).         % no (more) contraindications


worst([], _, none, []).
worst([Condition|Rest], Depth, Depth, Rest) :-
        evaluate(Condition), !.
worst([_|Col], D, W, Residue) :-
        E is D+1,
        worst(Col, E, W, Residue).

---------------------------------------------------------------*/


antiviral(Which) :-
        evaluate(full←therapeutic←antiviral←dose←recommended),
        prefer(Which, efficacy,

         [pregnancy, resistance, severe_algy, mild_algy ], [
   ftft =[fail,      rtft,       at3,         at1       ],
   fvira=[fail,      rvira,      av3,         av1       ],
   fidu =[preg,      ridu,       ai3,         ai1       ]  ]).


go :-
        antiviral(X),
        write(X), nl,
        pp(chosen), pp(despite), pp(rejected).

og :-
        abolish(chosen, 3),
        abolish(despite, 2),
        abolish(known, 2),
        abolish(rejected, 2).

------------------------------

End of AIList Digest
********************

∂26-Sep-84  0102	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #125    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Sep 84  01:01:54 PDT
Date: Tue 25 Sep 1984 23:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #125
To: AIList@SRI-AI


AIList Digest           Wednesday, 26 Sep 1984    Volume 2 : Issue 125

Today's Topics:
  Expert Systems - Foster Care Placements,
  LISP - Franz Lisp Help,
  Inductive Proof - The Heap Problem,
  Machine Translation - Natural Languages as Interlinguas,
  Seminars - Semantic Modulation & SOAR Intelligent System
  Administrivia - Current Distribution List
----------------------------------------------------------------------

Date: Sun, 23 Sep 84 22:34 EST
From: Ed Fox <fox%vpi.csnet@csnet-relay.arpa>
Subject: Expert System for Foster Care Placements

One of my students has begun a project as described below.
We are wondering if there are any similar projects that people
would be willing to let us know about.
Many thanks, Ed Fox.

 This expert system will provide assistance to social workers charged with
 finding suitable substitute care placements for children who cannot continue
 to live with their families.  The system's rules will be based on
 expert input from social workers and an analysis of a social service agency's
 case records to determine the constellation of child, natural family, and
 substitute caregivers' characteristics and environmental factors which have
 been associated with successful placements in the past.  Users will be asked
 for descriptive information about the child for whom a placement is being
 sought and his/her family.  System output will provide the social worker with
 a description(s) of substitute care settings which can be expected to suit the
 needs of the particular child and contribute to a successful placement.

------------------------------

Date: 25 Sep 1984 07:59:09-EDT
From: kushnier@NADC
Subject: Help- Franz Lisp


Help!
Does anyone have a good practical guide to Franz LISP running under UNIX
on a VAX ?
Is there a way to list the LISP environment when running the interpreter or
do you have to go in and out using the Unix editors?
Can you save the LISP environment to an editor file while you are in LISP?

P.S. I have the Franz LISP manual, but I haven't translated it to English yet.

P.P.S. I haven't even figured out what language it's written in.......

                                     Ron Kushnier
                                     kushnier@nadc.arpa

[I'm not sure what's possible under Berkeley Unix (if that's what you
have) since I'm using a VAX EUNICE system.  Our people have rigged the
EMACS editor so that it can be called from Franz, provided that you load
and then suspend EMACS before starting up Franz.  Interpreted functions
can thus be edited and newly edited functions can be run; special editor
macros facilitate this.  4.1BSD Unix lacks the interprocess mechanisms
needed to support this (LEDIT), although EMACS process windows running
Franz are possible; 4.2BSD may be more flexible.

To examine your environment while in Franz, use the pp (pretty-print)
command.  You can certainly save an environment; check out the
dumplisp and savelisp commands.  For a readable Franz tutorial get
Wilensky's new LISPcraft book.  -- KIL]

------------------------------

Date: 19 Sep 84 14:42:49-PDT (Wed)
From: ihnp4!houxm!mhuxj!mhuxn!mhuxl!ulysses!allegra!princeton!eosp1!robison
      @ Ucb-Vax.arpa
Subject: Re: Inductive proof -- the heap problem
Article-I.D.: eosp1.1131


BUT! Human beings continually reason inductively on tiny amounts
of info, often two or even one case!  We have some way of monitoring
our results and taking back some of the inductions that were wrong.
AI has to get the hang of this some day...

--- Toby Robison

------------------------------

Date: Mon, 24 Sep 84 22:28 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: natural languages as interlinguas for MT


Re: using a natural language as an interlingua in a machine translation
system

A natural language and an MT interlingua have different purposes and are
designed differently.  An interlingua should be ambiguity-free and should
facilitate automatic reasoning about the knowledge encoded in it.  A natural
language is designed to be used by truly intelligent speakers and hearers, so
that a lot of polysemy, homonymy, anaphoric phenomena, even outright errors
can be put up with -- because the understander is so sophisticated.  Brevity
is at a premium in natural language communication, not clarity.

The most recent attempt to use a language designed for humans as an MT
interlingua is the Dutch researcher A. Witkam's attempt in his DLT machine
translation project.  He plans to use Binary-Coded Esperanto (BCE) as the
interlingua in a planned multilingual MT system.

An analysis of the approach shows that in reality the system involves two
complete (transfer-based) translation modules: 1) Source language to BCE; and
2) BCE to Target language.

Of the many possible points of criticism, let me mention just one: this
approach (in effect, double transfer) has nothing to do with AI methods.
If transfer is used, it is not clear why an interlingua should be involved at
all.

For some more discussion see Tucker and Nirenburg, "Machine Translation: A
Contemporary View", in the 1984 issue of the Annual Review of Information
Science and Technology.

At the same time, it would be nice to see a technical discussion of the
system by Guzman de Rojas -- is any such thing available?

Sergei

------------------------------

Date: Mon, 24 Sep 1984  15:30 EDT
From: WELD%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Semantic Modulation

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

        The AI revolving seminar this week is by David McAllester:

        SEMANTIC MODULATION:  A Relevance Based Inference Technique

        The Reasoning Utility Package RUP provides a set of
propositional inference mechanisms for constructing inference-based
data bases and reasoning systems.  This talk will present new
inference mechanisms which can be incorporated into the RUP
architecture.  These inference mechanisms reason about quantified
formulae using a technique based on the "modulation" of the
interpretation of free parameters.  By modulating the interpretation
of free parameters it is possible to perform a wide variety of
quantificational inferences without ever "consing" new formulae.
The semantic modulation inference mechanism relies on a notion
of relevance in propositional reasoning:  when a formula is proven
one can determine a subset of premises relevant to the proof.
The relevant subset is usually smaller than the set of premises actually
used in the proof.  Semantic modulation is also closely related to
the notions of "inheritance" and "virtual copy" used in semantic networks.


Time:           2:00PM          Wednesday Sept. 26  (THIS Wednesday)
Place:          7th Floor Playroom

------------------------------

Date: Tue 25 Sep 84 11:09:13-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - SOAR Intelligent System

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, September 28, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     John Laird,
             Xerox Corp.

ABSTRACT:    SOAR: An Architecture for General Intelligence

I will present recent progress in developing an architecture for general
intelligence, called Soar.   In Soar, all problem solving occurs as
search in a problem space and all knowledge is encoded as production
rules.  I will describe the Soar architecture and then present three
demonstrations of its generality and power.

1. Universal Subgoaling: All subgoals are created automatically by the
architecture whenever the problem solver is unable to carry out the
basic functions of problem solving (so that all subgoals in Soar are
also meta-goals).  All the power of Soar is available in the subgoals,
including creating new subgoals, making Soar a completely reflective
problem solver.

2. A Universal Weak Method: The weak methods emerge from knowledge about
a task instead of through explicit representation and selection.

3. R1-Soar: Although Soar was designed for general problem-solving, it
is also effective in the knowledge-intensive domains of expert systems.
This will be demonstrated by a partial implementation of the R1 expert
system in Soar.

Soar also has a general learning mechanism, called Chunking.  Paul
Rosenbloom will present this aspect of our work at the SIGLunch on
October 5.

------------------------------

Date: Tue 25 Sep 84 14:08:12-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Current Distribution List

SIGART has recently been publishing names of companies involved in AI,
which started me wondering just where AIList goes.  The following are
organizations that I mail to directly, as nearly as I can figure out
from the net names.  In some cases the digest goes to numerous campuses,
departments, or laboratories; in others it goes to a single individual.
AIList also goes to numerous sites through indirect remailings,
particularly through Usenet redistribution.  If anyone would like to
add to my list, please send a brief message to AIList-Request@SRI-AI.ARPA.

GOVERNMENT AND MILITARY:
Admiralty Surface Weapons Establishment
Air Force Institute of Technology Data Automation
Army Armament Research and Development Command
Army Aviation Systems Command
Army Communications Electronics Command
Army Engineer Topographic Laboratory
Army Materiel Systems Analysis Activity
Defense Communications Engineering Center
National Aeronautics and Space Administration
National Library of Medicine
National Research Council Board on Telecomm.-Comp. Applications
National Science Foundation
Naval Air Development Center
Naval Intelligence Processing System Support Activity
Naval Ocean Systems Center
Naval Personnel Research and Development Center
Naval Research Laboratory
Naval Surface Weapons Center
Norwegian Defence Research Establishment

LABORATORIES AND RESEARCH INSTITUTES:
Aerospace Medical Research Laboratory
Brookhaven National Laboratory
Center for Seismic Studies
Center for Studies of Language and Information
Jet Propulsion Laboratory
Lawrence Berkeley Laboratory
Lawrence Livermore Labs
Los Alamos National Laboratory
MIT Lincoln Laboratory
NASA Ames Research Center
Norwegian Telecommunication Administration Research Institute
Oak Ridge National Laboratory
Sandia National Laboratories
USC Information Sciences Institute

CORPORATIONS AND NONPROFIT ORGANIZATIONS:
ACM SIGART
Advanced Computer Communications
Advanced Information and Decision Systems
Bolt Beranek and Newman Inc.
Compion Corp.
Digital Equipment Corp.
Ford Aerospace and Communications Corp.
GTE Laboratories
General Motors Research
Hewlett-Packard Laboratories
Honeywell, Inc.
Hughes Research
IntelliGenetics
International Business Machines
Kestrel Institute
Linkabit
Litton Systems
Logicon, Inc.
Marconi Research Centre, Chelmsford
Northrop Research Center
Perceptronics
Philips
Rome Air Development Center
SRI International
Science Applications, Inc.
Software A&E
Tektronix, Inc.
Texas Instruments
The Aerospace Corporation
The MITRE Corporation
The Rand Corporation
Tymshare
Xerox Corporation

UNIVERSITIES:
Boston University
Brandeis University
Brown University
California Institute of Technology
Carnegie-Mellon University
Clemson University
Colorado State University
Columbia University
Cornell University
Georgia Institute of Technology
Grinnell College
Harvard University
Heriot-Watt University, Edinburgh
Louisiana State University
Massachusetts Institute of Technology
New Jersey Institute of Technology
New York University
Oklahoma State University
Rice University
Rochester University
Rutgers University
St. Joseph's University
Stanford University
State University of New York
University College London
University of British Columbia
University of California (Berkeley, Davis, UCF, UCI, UCLA, Santa Cruz)
University of Cambridge
University of Delaware
University of Edinburgh
University of Massachusetts
University of Michigan
University of Minnesota
University of North Carolina
University of Pennsylvania
University of South Carolina
University of Southern California
University of Tennessee
University of Texas
University of Toronto
University of Utah
University of Virginia
University of Washington
University of Wisconsin
Vanderbilt
Virginia Polytechnic Institute
Yale University

------------------------------

End of AIList Digest
********************

∂27-Sep-84  0258	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #126    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Sep 84  02:58:07 PDT
Date: Wed 26 Sep 1984 22:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #126
To: AIList@SRI-AI


AIList Digest           Thursday, 27 Sep 1984     Volume 2 : Issue 126

Today's Topics:
  AI & Business - Literature Sought,
  Expert Systems - Critique Pointer & Teknowledge's M.1
----------------------------------------------------------------------

Date: 20 Sep 84 10:29:09-PDT (Thu)
From: hplabs!sdcrdcf!trwrb!trwspp!jensen @ Ucb-Vax.arpa
Subject: AI for Business

   Article-I.D.: trwspp.582

I hope that I can obtain a list of resources that  apply  AI
techniques  to  business.   Such  resources  would  include:
research  bulletins,  software,  books,   and   conferences.
A while back,  I  recall  an  AI  for Business Summary being
offered, perhaps one of you still has a copy lying around on
disk.   I  will  pass on submissions to requesters, via mail
rather than a net posting.

Thank you very much for your assistance.
James Jensen

[I believe that Syming%B.CC@Berkeley is keeping an AI for Business
summary, as well as a list of interested individuals.  This is
still a suitable topic for AIList, of course.  -- KIL]

------------------------------

Date: Wed, 26 Sep 1984  10:25 EDT
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: request for info: commercialization of ai


Has anyone seen a report entitled "Commercial Applications of Expert
Systems"?  The author is Tim Johnson and it is put out by a company in
London named OVUM.  I'm wondering what perspective the report is
written from and whether or not it is worth tracking down.  Replies
can be sent directly to me at Chunka%mit-oz@mit-mc if general interest
in the topic does not exist.  Thanks,

                                      Chunka Mui

------------------------------

Date: Wed 26 Sep 84 03:44:35-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: A "populist's" view - Jerry Pournelle comments.

In Popular Computing, Nov 84, p 59, Jerry writes in his column THE MICRO
REVOLUTION about ARTIFICIAL EXPERTS: The computer as diagnostician has definite
limits.

Worth reading, as Jerry (love him or hate him) is a sharp and insightful
'populist' (consider this a compliment), who tries to bridge the gap
between experts and academia and is doing a credible job at it.  If you
keep a folder with informative articles about AI, especially with emphasis
on medical applications, you'll want to add this one.

------------------------------

Date: Tue 25 Sep 84 20:06:36-PDT
From: JKAPLAN@SRI-KL.ARPA
Subject: Clarification Regarding Teknowledge's M.1 Product

I recently learned that an article by John Dvorak criticizing our M.1
product in the San Francisco Chronicle 7/29/84 was reproduced and
distributed to the AIlist.  This article presented a distorted and
factually incorrect picture of the Teknowledge product.  The author
made no attempt to contact us for information prior to publishing the
article and as far as we know, has not seen the product. The article
appears to be based solely on information from a brochure, and
hearsay.

Based on the tone and content of the article, it was apparently
written primarily for entertainment value, and so we decided it would
not be fruitful to draft a formal reply.  However, the AIlist might be
interested in a response.  [I added a note to the original article
requesting such a response.  -- KIL]

First about M.1 -

M.1 is a  knowledge engineering tool that enables technical
professionals without prior AI experience to build rule-based
consultation systems. It is designed for rapid prototyping of
large-scale applications, as well as building small-scale systems. The
product includes a four-day hands-on course, extensive documentation,
sample systems, training materials, one year of "hot-line" support,
and maintenance.

M.1 contains a variety of advanced features. Some of interest to the
AIlist types include: certainty factors; a multi-window interactive
debugging environment; explanation facility; list processing; single-
and multi-valued attributes; variables; dynamic overlays of the
knowledge base during consultations; presupposition checking;
and automatic answer "completion". However, the system was carefully
designed so that it can be learned incrementally, i.e. the beginner
doesn't have to understand or use these features.

An initial CPU costs $12,500 (not $12,000 as stated in the article),
which includes training. Additional licenses cost $5,000 with
training, and $2,500 without.

Strategically, M.1 fills a gap between mainframe- or lisp
machine-based tools for AI professionals, and a variety of less
sophisticated systems available to hobbyists.

Turning to the article -


Dvorak makes basically three points:

1. The program is overpriced for personal computer software.

2. The program gives bad advice about wine.

3. Expert systems are too complex to run on micros, at least with M.1.

Let me respond briefly to  each point.

1. M.1 is not targeted to "personal computer owners" the way Wordstar
and VisiCalc are.  M.1 is not intended, nor is it suitable for, mass
distribution.  While M.1 can be used effectively without a graduate
degree in artificial intelligence, it is still quite a distance from
business productivity tools (such as Lotus 1-2-3) for non-technical
computer users.

Rather, it is a tool for technical professionals.  We decided to host
the system on the IBM Personal Computer rather than the VAX or other
environments because (a) we believed this would be more convenient for
our target customers, and (b) it was technically possible without
compromising the product.

M.1 is priced consistent with similar systems that run on the IBM
Personal Computer, such as CAD/CAM tools, or modelling and simulation
packages.  These systems typically appeal to a specialized audience,
and come with extensive training and support (as does M.1).

Our customers and the trade press understand the value of and
rationale for such systems. Some members of the popular and business
press do not. When we receive inquiries from these latter groups, we
explain the product positioning and provide appropriate references and
data points. We did not have this opportunity with Mr. Dvorak.

2. M.1 comes with a variety of sample knowledge systems that
illustrate various M.1 features and suggest potential areas of
application.  Skipping past extensive consultations in the M.1 brochure
with a Bank Services Advisor and a Structural Analysis Consultant, Mr.
Dvorak reprints an edited transcript of a sample system that provides
Wine Advice, in an attempt to ridicule the quality of the product.

In our brochure, the purpose of the brief wine advisor example is to
illustrate that the user's preferences can be taken into account in a
consultation, and that the user can change his or her mind part way
through a consultation. Initially, the user specifies a preference for
red wine, despite the fact that the meal contains poultry. The M.1
knowledge base naturally recommends a set of red wines.  Mr. Dvorak's
version of the consultation stops at this point. In the balance of the
consultation, the user changes to moderately sweet white wines, and is
advised to try chardonnay, riesling, chenin blanc, or soave.

While it may occasionally provide controversial advice, the wine
advisor sample system was reviewed before release by two California
wine experts, who felt that its advice was quite reasonable.

3.  Regarding Mr. Dvorak's final point, he is simply wrong. Micros in
general, and M.1 in particular, are powerful enough to solve high
value knowledge engineering problems.  Approximately 200 knowledge
base entries (facts and rules) can be loaded at any one time, and can
be overlaid dynamically if larger knowledge bases are required,
making the only practical limit the amount of disk storage. Through
the use of variables and other representational features, the language
is more concise and powerful than most of its predecessors.  Practical
systems such as the Schlumberger Dipmeter Advisor and the PUFF system
at the Pacific Medical Center in San Francisco use knowledge bases
that could fit easily within the M.1 system without overlays.

For pedagogical purposes, we reimplemented a subset of SACON, a system
originally developed at Stanford University using EMYCIN, as a sample
system.  SACON provides advice to structural engineers on the use a
complex structural analysis Fortran program.  Our sample system
demonstrates that M.1 has sufficient functionality at reasonable speed
to accomplish this task. (The current version does NOT contain the
entire original knowledge base - time and project resource constraints
precluded our doing a complete translation. It includes all questions
and control rules, which account for about 50% of the original system,
but only about half of the judgmental rules, using no overlays. The
reimplementation can run the standard consultation examples from the
SACON literature.)



AIlist readers may be interested to know that M.1 has been selling
very well since its introduction in June. Our customers have been
extremely pleased with the system - many have prototyped serious
applications in a short period of time after taking the course, and at
a cost far below their available alternatives.

For more serious reviews of M.1, may I refer you to

Rosann Stach
Manager of Corporate Development and Public Relations
Teknowledge Inc
525 University Ave
Palo Alto, CA
415-327-6600


                                Jerry Kaplan
                                Chief Development Officer
                                Teknowledge

------------------------------

End of AIList Digest
********************

∂28-Sep-84  0103	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #127    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Sep 84  01:02:04 PDT
Date: Thu 27 Sep 1984 23:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #127
To: AIList@SRI-AI


AIList Digest            Friday, 28 Sep 1984      Volume 2 : Issue 127

Today's Topics:
  Computer Music - Mailing List,
  Expert Systems - Windows,
  Machine Translation - Natural Languages as Interlinguas,
  Natural Language - Idioms,
  Logic - Induction and Deduction,
  Seminar - Anatomical Analogy for Linguistics
----------------------------------------------------------------------

Date: 26 September 1984 1043-EDT
From: Roger Dannenberg at CMU-CS-A
Subject: Computer Music Mailing List

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        If you are interested in announcements pertaining to computer music
(such as the one you are reading), send mail to Dannenberg@cmu-cs-a and
I'll put you on my mailing list.
        First announcement: there will be a seminar on Monday, October 8,
from 11 to 1 with pre-presentations of 3 talks from the 1984 International
Computer Music Conference.  Please let me know if you plan to attend.

------------------------------

Date: Thu 27 Sep 84 10:09:16-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Windows and Expert Systems

Has anyone else become bothered by the recent apparent equation between
window packages and expert system tools?  The recent spiel on Teknowledge's
M.1 takes care to mention that it provides windows (along with other features).
However, other vendors (for instance all of those at the recent AAAI) seem
to emphasize their window and menu capabilities at the expense of actual
reasoning capacity.  Recent papers on expert systems at both AAAIs and IJCAIs
include the obligatory picture of a screen with all the capabilities being
shown at once (even if they're not really related to the paper's content).
What's going on?
Does a window system really have something substantial to offer expert systems
development?  If so, what is it?  Ultra-high bandwidth for display, so that
the system doesn't have to decide what the user wants to see - it just shows
everything?  Do people get entranced by all the pretty pictures?  Ease of
managing multiple processes (what expert system tools can even employ multiple
communicating processes)?  We've got zillions of machines with window systems
around here, but they seem supremely irrelevant to the process of expert
system development (perhaps because I tend to regard a system that requires
only low-bandwidth communication to be more inherently intelligent - it has
to do more inference to supply missing information).  Can anyone give a solid
justification for windows being an essential part of an expert systems tool?
(Please no one say anything about it being easier to sell tools with flashy
graphics...)

                                                        stan shebs

------------------------------

Date: 26 Sep 1984 09:33-PDT (Wednesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: natural languages as interlinguas for MT

        Sergei Nirenburg's statement that "a natural language and an
MT interlingua have different purposes and are designed differently"
is false and reveals an incorrect premise underlying much linguistic and
AI research.  There is a natural language, spoken between
1000 B.C. and 1900 A.D. and used amongst a scientific community,
which was ambiguity-free (in some senses syntax-free) and which
facilitated automatic inference.  Instead of saying "John gave Mary
a book" these scientists would say "there was a giving event, having as
agent John, who is qualified by singularity...etc".
        I have shown this well-developed system to be equivalent to
certain semantic net systems, and in some cases the ancient language
is even more specific.
        The language is an obscure branch of Indo-Iranian of which there
are no translations, but the originals are extant.
        Natural languages CAN serve as interlingua.

Rick Briggs
briggs@riacs

------------------------------

Date: Thu 27 Sep 84 10:58:36-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: Re: Having no crime rate & other text curiosities

  Continuing the consideration of texts that contain mistakes but are still
comprehensible:
  Another example, this from the Summer '84 issue of Foreign Affairs (p 1077):

  "In nine months... the [Argentine] peso fell in value by more than 400
percent."

------------------------------

Date: 9 Sep 84 10:06:00-PDT (Sun)
From: hplabs!hp-pcd!hpfclk!fritz @ Ucb-Vax.arpa
Subject: Re: Inductive Proof - The Heap Problem
Article-I.D.: hpfclk.75500005

    As an example of improper induction, consider the heap problem.
    A "heap" of one speck (e.g., of flour) is definitely a small heap.
    If you add one speck to a small heap, you still have a small heap.
    Therefore all heaps are small heaps.
                                        -- Ken Laws

That's a little like saying, "The girl next to me is blonde.  The
girl next to her is blonde.  Therefore all girls are blonde."  (Or,
"3 is a prime, 5 is a prime; therefore all odd numbers are prime.")

An observation of 2 (or 3, or 20, or N) samples does *not* an inductive
proof make.  In order to have an inductive proof, you must show that
the observation can be extended to ALL cases.

    [I disagree with Gary's analysis of the flaw.  I didn't say "if
    you add one speck to a one-speck heap", I said that you could add
    one speck to a (i.e., any) small heap.  -- KIL]
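The heap argument above has exactly the shape of mathematical induction,
which is why it looks compelling.  A minimal formalization (notation mine,
not Ken Laws's) makes that shape explicit:

```latex
% Sorites as induction, with Small(n) = "a heap of n specks is small"
\begin{align*}
&\text{Base case:}      && \mathit{Small}(1) \\
&\text{Inductive step:} && \forall n\;\bigl(\mathit{Small}(n)
                            \rightarrow \mathit{Small}(n+1)\bigr) \\
&\text{Conclusion:}     && \forall n\;\mathit{Small}(n)
\end{align*}
```

The inference rule itself is valid; the trouble is the inductive step,
which fails for a vague predicate like "small" even though no single
speck can be blamed for the failure.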


Mathematician's proof that all odd numbers are prime:
  "3 is a prime, 5 is a prime, 7 is a prime; therefore, by INDUCTION,
  all odd numbers are prime."

Physicist's proof:
  "3 is a prime, 5 is a prime, 7 is a prime,... uhh, experimental error ...
   11 is a prime, 13 is a prime, ...."

Electrical Engineer's proof:
  "3 is a prime, 5 is a prime, 7 is a prime, 9 is a prime, 11 is a prime..."

Computer Scientist's proof:
  "3 is a prime, 5 is a prime, 7 is a prime,
                               7 is a prime,
                               7 is a prime,
                               7 is a prime,
                               7 is a prime, ..."

Gary Fritz
Hewlett Packard Co
{ihnp4,hplabs}!hpfcla!hpfclk!fritz

------------------------------

Date: Wed 26 Sep 84 10:42:28-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: Induction

There's another name for "induction" on one case: generalization.  Lenat's
AM and the Boyer-Moore theorem prover are both capable of doing
generalizations, and there are probably others that can do it also.
Not too hard really;  if you've set up just the right formalism,
generalization amounts to easily-implemented syntactic mutations (now
all we need is a program to come up with the right formalisms!)

                                                        stan shebs
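[The "syntactic mutation" view of generalization can be sketched in a few
lines.  The rule representation and function names below are hypothetical
illustrations of the idea, not the actual machinery of AM or the
Boyer-Moore prover; two classic mutations are shown: dropping a condition
and turning a constant into a variable.  -- Ed.]

```python
# Toy sketch: generalization as syntactic mutation on rules.
# A rule's left-hand side is a list of condition tuples, e.g.
# ("color", "clyde", "gray") for "Clyde's color is gray".
# (Representation and names are hypothetical illustrations.)

def drop_condition(conditions, i):
    """Generalize by deleting the i-th condition:
    fewer conditions cover more cases."""
    return conditions[:i] + conditions[i + 1:]

def constants_to_variable(conditions, constant, var="?x"):
    """Generalize by replacing every occurrence of a constant
    with a variable, extending a fact about one individual
    to all individuals."""
    return [tuple(var if term == constant else term for term in cond)
            for cond in conditions]

rule = [("color", "clyde", "gray"), ("species", "clyde", "elephant")]

# Dropping the color condition widens "gray elephants" to "elephants".
print(drop_condition(rule, 0))

# Replacing "clyde" with ?x widens one individual to any individual.
print(constants_to_variable(rule, "clyde"))
```

Whether either mutation is a *good* generalization is exactly the part
the formalism has to be set up in advance to guarantee.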

------------------------------

Date: 17 Sep 84 9:03:48-PDT (Mon)
From: hplabs!pesnta!scc!steiny @ Ucb-Vax.arpa
Subject: Re: induction vs. deduction
Article-I.D.: scc.156

A point about logical induction that has not come up is
one that Charles Sanders Peirce (who coined the
term "pragmatism") argued: that one can never prove
anything inductively.  We believe that any human will die
eventually, and we reason to that belief inductively.
We do not, however, have records on every human that has
ever existed, and humans that are still alive offer
no evidence to support the statement "all humans die".

        Peirce (being pragmatic), did not think we should
throw away the principle just because we can't prove anything
with it.  He suggested renaming it "reduction" (and renaming
deduction "abduction").  This  would leave the word
"induction" available to those special cases where
we do have all the evidence.
--
Don Steiny - Personetics @ (408) 425-0382
109 Torrey Pine Terr.
Santa Cruz, Calif. 95060
ihnp4!pesnta  -\
fortune!idsvax -> scc!steiny
ucbvax!twg    -/

------------------------------

Date: Wed, 26 Sep 84 17:27:31 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Anatomical Analogy for Linguistics

                BERKELEY COGNITIVE SCIENCE PROGRAM
                            Fall 1984
              Cognitive Science Seminar -- IDS 237A

       TIME:                Tuesday, October 2, 11 - 12:30
       PLACE:               240 Bechtel Engineering Center
       DISCUSSION:          12:30 - 2 in 200 Building T-4

   SPEAKER:        Jerry Sadock, Center for the Advanced  Study
                   in   the  Behavioral  Sciences;  Linguistics
                   Department, University of Chicago

   TITLE:          Linguistics as Anatomy

   ABSTRACT:       The notion of modularity in linguistic  sys-
                   tems  is often supported by invoking an ana-
                   tomical metaphor in which the  various  sub-
                   systems  of the grammar are the analogues of
                   the organs of the body.  The primitive  view
                   of  anatomy  that  is employed supposes that
                   the organs are entirely separate in internal
                   structure, nonoverlapping in function, shar-
                   ply  distinguished  from  one  another,  and
                   entirely autonomous in their internal opera-
                   tion.

                   There is a great deal of suggestive evidence
                   from  language  systems  that  calls many of
                   these assumptions into  question  and  indi-
                   cates  that there are transmodular `systems'
                   that form part of the internal structure  of
                   various  modules,  that there is a good deal
                   of redundancy of function between  grammati-
                   cal  components,  that the boundaries of the
                   modules are unsharp, and that  the  workings
                   of  one module can be sensitive to the work-
                   ings of another.  These facts do  not  speak
                   against  either the basic notion of modular-
                   ity of grammar or  the  anatomical  analogy,
                   but  rather  suggest  that  the structure of
                   grammatical systems is to be compared with a
                   more  sophisticated view of the structure of
                   physical organic systems than has been popu-
                   larly employed.

                   The appropriate analogy is not only biologi-
                   cally more realistic, but also holds out the
                   hope of yielding better accounts of  certain
                   otherwise    puzzling    natural    language
                   phenomena.

------------------------------

End of AIList Digest
********************

∂01-Oct-84  1132	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #128    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Oct 84  11:31:40 PDT
Date: Mon  1 Oct 1984 10:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #128
To: AIList@SRI-AI


AIList Digest             Monday, 1 Oct 1984      Volume 2 : Issue 128

Today's Topics:
  Education - Top Ten Graduate Programs,
  Natural Language - ELIZA source request,
  AI Tools - OPS5 & VMS LISPs & Tektronix 4404 AI Machine,
  Bindings - Syntelligence,
  AI Survey - Tim Johnson's Report,
  Expert Systems - John Dvorak's Column & Windows,
  Knowledge Representation - Generalization,
  Machine Translation - Natural Languages as Interlingua
----------------------------------------------------------------------

Date: 22 Sep 84 0:39:37-PDT (Sat)
From: hplabs!sdcrdcf!sdcsvax!daryoush @ Ucb-Vax.arpa
Subject: Top Ten
Article-I.D.: sdcsvax.79

What are the top ten graduate programs in AI?
MIT is first I suppose.

--id

------------------------------

Date: 24 Sep 84 13:41:12-PDT (Mon)
From: hplabs!hpda!fortune!amd!dual!zehntel!zinfandel!berry @ Ucb-Vax.arpa
Subject: Humor - Top Ten
Article-I.D.: zinfande.199

    What are the top ten graduate programs in AI?
                                -- Karyoush Morshedian

To the best of my knowledge, NO AI program has ever graduated from an
accredited degree-granting institution , though I do know of a LISP
program that's a Universal Life Church minister.....


Berry Kercheval         Zehntel Inc.    (ihnp4!zehntel!zinfandel!berry)
(415)932-6900

------------------------------

Date: 26 Sep 84 18:21:17-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Top Ten
Article-I.D.: wdl1.437

     The Stanford PhD program probably ranks in the top 10.  (The MS
program is much weaker).

------------------------------

Date: 29 Sep 84 17:39:34-PDT (Sat)
From: hplabs!hao!seismo!umcp-cs!koved @ Ucb-Vax.arpa
Subject: Re: ELIZA source request
Article-I.D.: umcp-cs.171

I would also like a copy of ELIZA if someone could send it to me.
Thanks.

Larry
koved@umcp-cs or koved@maryland.arpa

Spoken: Larry Koved
Arpa:   koved.umcp-cs@CSNet-relay
Uucp:...{allegra,seismo}!umcp-cs!koved

------------------------------

Date: 26 Sep 84 18:21:31-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Info needed on OPS5
Article-I.D.: wdl1.438

    OPS5 runs in Franz Lisp on the VAX, and can be obtained from
Charles Forgy at CMU.  It can be obtained via the ARPANET, but an agreement
must be signed first.

------------------------------

Date: 26 Sep 84 18:21:46-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: VMS LISPS
Article-I.D.: wdl1.439

     And then, there is INTERLISP-VAX, the Bulgemobile of language systems.

------------------------------

Date: 27 Sep 84 10:12:10-PDT (Thu)
From: hplabs!tektronix!orca!iddic!rogerm @ Ucb-Vax.arpa
Subject: Tektronix 4404 AI Machine
Article-I.D.: iddic.1822

For information on the 4404 please contact your nearest Tektronix AIM Sales
Specialist; Tektronix Incorporated.

    Farwest:  Jeff McKenna
              3003 Bunker Hill Lane
              Santa Clara, CA 95050
              (408) 496-0800

    Midwest:  Abe Armoni
              PO Box 165027
              Irving, TX. 75016
              (214) 258-0525

  Northwest:  Gary Belonzi
              482 Bedford St.
              Lexington, MA. 02173
              (617) 861-6800

  Southeast:  Reed Phillips
              Suite 104
              3725 National Drive
              Raleigh, NC. 27612
              (919) 782-5624

This posting is to relieve tekecs!mako!janw from fielding responses that she
doesn't have time to answer after her initial posting several weeks ago.

Thank you.

------------------------------

Date: Fri 28 Sep 84 13:29:11-PDT
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: NEW ADDRESS FOR SYNTELLIGENCE

           [Forwarded from the SRI bboard by Laws@SRI-AI.]


Syntelligence is pleased to announce their new Headquarters at

                           100 Hamlin Court
                            P.O. Box 3620
                         Sunnyvale, CA 94088
                             408/745-6666

Effective September 1, 1984.

------------------------------

Date: Thu 27 Sep 84 08:18:14-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Tim Johnson's Report

On AIList today, I saw where someone was asking about the report Tim Johnson
did on Commercial Applications of AI.  It was produced by Ovum Ltd. in England
and is available for about $350 from a place in Portola Valley.  I have the
address at home and can send that to you later.  The report covers AI
research and applications in the USA and UK but also covers the larger research
projects worldwide.  It is a well written and researched report.

Harry Llull

------------------------------

Date: 28 Sep 1984 15:15:09-PDT
From: smith%umn.csnet@csnet-relay.arpa
Subject: John Dvorak as an information source

  I assume that the John Dvorak who wrote the critique of M.1 is the same
one that writes a weekly column in InfoWorld.  He is not what I would consider
a reliable source of technical information about computers.  His columns
usually consist of gossip and unsupported personal opinion.  What he writes
can be interesting but I like to see facts once in a while, too.  I've read
exactly one good column of his -- it was about computer book PUBLISHING rather
than about computers or software.  He looks to me like a talented individual
who spends too much time out of his league, but is respected for it anyway.
This is common in the 'popular' computer media these days, I guess.

Rick.

------------------------------

Date: Fri 28 Sep 84 10:44:28-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Windows and Expert Systems

Reply to Shebs' flames:

No there is no direct relationship between window systems and expert
systems.  However, the goal of these vendors is to sell software
systems that make it easy to CONSTRUCT, DEBUG, and USE expert systems.
We know that high bandwidth between programmer and program makes it
easier to construct and maintain a program.  Similarly, high bandwidth
(properly employed) makes it easier to use a program.  The goal is to
reduce the cognitive load on the user/programmer, not to maximize
the cognitive load on the program.

Good software is 90% interface and 10% intelligence.

--Tom

------------------------------

Date: Fri 28 Sep 84 11:04:58-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Generalization

Reply to Shebs' other flame:

"Induction...Not too hard really;"

Shebs' comments are very naive.  Of course it isn't too hard to
construct a MECHANISM that sometimes performs inductive
generalizations properly.  However, every mechanism developed thus far
is very ad hoc.  They all rely on "having the right formalism".  In
other words, the programmer implicitly tells the program how to
generalize.  The programmer communicates a set of "biases" or
preferences through the formalism.  Many of us working in inductive
learning suspect that general techniques will not be found until we
have a THEORY that justifies our generalization mechanisms.  The
justification of induction appears to be impossible.  Appeals to the
Principle of Insufficient Reason and Occam's Razor just restate the
problem without solving it.  In essence, the problem is: What is
rational plausible inference?  When you have no knowledge about which
hypothesis is more plausible, how do you decide that one hypothesis IS
more plausible?  A justification of inductive inference must rely on
making some metaphysical assertions about the nature of the world and
the nature of knowledge.  A justification for Occam's razor, for
example, must show why syntactic simplicity necessarily corresponds to
simplicity in the real world.  This can't be true for just any
syntactic representation!  For what representations is it true?

--Tom

------------------------------

Date: Fri 28 Sep 84 14:58:48-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Natural languages as MT interlingua

I would like to hear more about the language mentioned by
briggs@riacs as a natural language suitable for use as an MT
interlanguage. Specifically, what is it called and where is
it documented? Where did he publish his demonstration that
it is equivalent to certain kinds of semantic nets?
I would also be interested to hear in what sense he means that
it is a natural language. Virtually all known natural languages
are ambiguous, in the sense that they contain sentences that are
ambiguous, but that does not mean that they cannot be used unambiguously.
An example is the use of English in mathematical writing - it is
possible to avoid ambiguity entirely by careful choice of syntax
and avoidance of anaphora.  I wonder whether Briggs' language is not
of the same sort - a natural language used in a specialized and
restricted way.

                                        Bill Poser
                                        (poser@su-csli,poser@su-russell)

------------------------------

Date: Fri, 28 Sep 84 15:14:41 PDT
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: Natural Languages

A recent comment was made that natural languages can serve as an
interlingua.  I disagree.  There's an ancient language used by scientists
to communicate that's called "mathematics"... but is that a
"natural" language?   Natural languages have certain features, namely,
ambiguity, reference to complex conceptualizations regarding human
affairs, and abbreviated messages (that is,  you only say a tiny bit
of what you mean,  and rely on the intelligence of the listener to
combine his/her knowledge with the current context to reconstruct
everything you left out).  If that ancient language spoken by Iranian
scientists was unambiguous and unabbreviated,  then it's probably
about as "natural" as mathematics is as a language.  Then, also, there's
LOGLAN,  where,  when you say (in it) "every sailor loves some woman",  you
specify whether each sailor has his own woman or whether everyone
loves the same woman.  Fine,  but I'd hate to have to use it as an
everyday "natural" language for getting around.  Natural languages
are complicated because people are intelligent.  The job of AI NLP
researchers is to gain insight into natural languages (and the cognitive
processes which support their comprehension) by working out  mappings
from natural languages into formal systems (i.e., realizable on stupid
machines).  It's hard enough mapping NL into something unambiguous
without mapping it into a language that itself must be parsed to remove
ambiguities and to resolve contextual references, etc.  It's conceivable
that a system could parse by a sequence of mappings into a sequence of
slightly more formal (i.e., less "natural") intermediate languages.  But then
disambiguation, etc., would have to be done over and over again.  Besides,
people don't seem to be doing that.   Natural languages and formal languages
serve different purposes.  English is currently used as an "interlingua"
by the world community,  but that is using the term "interlingua" in a
different sense.  The interlingua we need for NLP research should not
be "natural".
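
The LOGLAN scope distinction mentioned above can be spelled out against a
toy model (an illustrative sketch only; the individuals and the "loves"
relation are invented):

```python
# Toy model: two sailors, two women, and a "loves" relation.
sailors = {"s1", "s2"}
women = {"w1", "w2"}
loves = {("s1", "w1"), ("s2", "w2")}  # each sailor loves a different woman

# Reading 1: for every sailor there is some woman he loves.
wide_every = all(any((s, w) in loves for w in women) for s in sailors)

# Reading 2: there is one woman whom every sailor loves.
wide_some = any(all((s, w) in loves for s in sailors) for w in women)

print(wide_every)  # True: each sailor has his own woman
print(wide_some)   # False: no single woman is loved by all sailors
```

English leaves the choice between the two readings to context; LOGLAN makes
the speaker commit to one of them.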

------------------------------

End of AIList Digest
********************

∂02-Oct-84  1108	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #129    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Oct 84  11:08:05 PDT
Date: Tue  2 Oct 1984 09:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #129
To: AIList@SRI-AI


AIList Digest            Tuesday, 2 Oct 1984      Volume 2 : Issue 129

Today's Topics:
  Bindings - Syntelligence Address Correction,
  Induction - Induction on One Case,
  Machine Translation - Sanskrit,
  Humor - Onyx BC8820 Stone Block Reader,
  Seminar - Learning in SOAR,
  Conference - Knowledge-Based Command and Control
----------------------------------------------------------------------

Date: 01 Oct 84  1144 PDT
From: Russell Greiner <RDG@SU-AI.ARPA>
Subject: Syntelligence: Address Correction

Syntelligence, an AI company specializing in building
expert systems for business applications, has just moved.
Its new address and phone number are

        Syntelligence
        1000 Hamlin Court          [not 100]
        PO Box 3620
        Sunnyvale, CA 94088
        (408) 745-6666

Dr Peter Hart, its president, can also be reached as
HART@SRI-AI.arpa.  (This net address should only be used for
professional (e.g., AAAI related) reasons.)

------------------------------

Date: Mon 1 Oct 84 14:10:23-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: Induction on One Case

(My my, people seem to get upset, even when I think I'm making
 noncontroversial statements...)

It wasn't clear whether Tom Dietterich (and maybe others) understood
my remark on induction.  I was merely pointing out that "induction on
one case" is indistinguishable from "generalization".  Simple-minded
generalization IS easy.  Suppose I have as input a Lisp list (A B),
(presumably the first in a stream), and I tell my machine to create
some hypotheses about what it expects to see next.  Possible hypotheses
are:

  (A B)         - the machine expects to see (A B) forever
  (?X B)        - the machine expects to see 2nd element B
  (A ?X)        - similarly
  (?X ?Y)       - 2-element lists

Since these are lists, presumably one could get more elaborate...

  (?X ?Y optional ?Z)
  ...

And end up with "the most general hypothesis":

  ?X

All of these patterns can be produced just by knowing how to form
Lisp lists;  I don't think there are any hidden assumptions or biases
(please enlighten me if there are).  I would say that in general,
one can exhaustively generate all hypotheses, when the domains
are completely specified (i.e. a pattern like (<or A B> B) for the
above example has an undefined entity "or" which has nothing to do
with Lisp lists; one would have to extend the domains in which one
is operating).  Generating hypotheses in a more reasonable order is
completely domain-dependent (and no general theory is known).
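
The exhaustive generation described above can be sketched for the flat-list
case (a minimal illustration; the function name "generalizations" is mine,
and the ?X/?Y notation follows the patterns listed earlier):

```python
from itertools import product

def generalizations(lst):
    """Enumerate every pattern obtained by either keeping or
    variabilizing each element of a flat list, plus the fully
    general pattern ?X (the whole input as one variable)."""
    patterns = []
    for choice in product([False, True], repeat=len(lst)):
        names = iter(["?X", "?Y", "?Z"])  # fresh variables for short lists
        patterns.append(tuple(next(names) if v else elem
                              for elem, v in zip(lst, choice)))
    patterns.append("?X")
    return patterns

print(generalizations(("A", "B")))
# → [('A', 'B'), ('A', '?X'), ('?X', 'B'), ('?X', '?Y'), '?X']
```

All five hypotheses come out of nothing but the structure of lists, which is
the point: with one case in hand, nothing distinguishes among them.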

Getting back to the example, all of the hypotheses are equally
plausible, since there is only one case to work from (unless one
wants to arbitrarily rank these hypotheses somehow; but none can
be excluded at this point).

I agree that selecting representations is very hard; there's not
even any consensus about what representations are useful, let alone
about how to select an appropriate one in particular cases.

(Have I screwed up anywhere in this?  I really wasn't intending
to flame...)

                                                stan shebs

------------------------------

Date: 1 Oct 1984 16:01-PDT (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sanskrit

        In response to the flood of messages I received concerning the
ambiguity-free natural language, here is some more information about it.
        The language is a branch of Sastric Sanskrit which flourished
between the 4th century B.C. and the 4th century A.D., although its
beginnings are somewhat older.  That it is unambiguous is without
question.  (I am writing two papers, one for laymen and one for those with
AI background).  A more interesting question is one posed by Dr. Michael
Dyer, that is "is it a natural language?".
        The answer is yes, it is natural and it is unambiguous.  It
would be difficult to call a language living and spoken for over a
millennium, with as rich a literature as this language has, anything but a
natural language.  The problem is that most (maybe all) of us are used
to languages like English (one of the worst) or other languages which
are poor vehicles for the transmission of logical data.  We have
assumed that, since all known languages have ambiguity, it is
a necessary property of natural languages, but there is no reason to
make this assumption.  The complaint that it is awkward to speak
with the precision required to rule out ambiguity is one based on
(I would guess) the properties of English or other common Indo-European
languages.
        If one were to take a specific formulation such as a semantic
net and "read" it in English the result is a cumbersome mass of
detail which nobody would be willing to use in ordinary communication.
However, if one were to take that same semantic net and translate it
into the language I am studying, one would get (probably) one very long word
with a series of affixes which convey very compactly the actual meaning
of the semantic net.  In other words, translations from this language
to English are of the same nature as those from a semantic net to
English (hence the equivalence to semantic nets), one compact structure
to a long paragraph.
        The facility and ease with which these Indians communicated
indicates that it is possible for a natural language to serve all
purposes of artificial languages based on logic.  If one could say
what one wishes to say with absolute clarity (although with apparent
redundancy) in the same time and with the same ease as one says
part of what one means in English, why not do so?  And if a population
actually got used to talking in this way there would be much more
clarity and less confusion in our communication.  Sastric Sanskrit
allows you to say WHAT YOU MEAN without effort.  The questions
"Can you elaborate on that?" or "What exactly are you trying to say?"
would simply not come up unless the hearer wished to go to a deeper
level of detail.
        This language was used in much the same way as language found
in technical journals today.  Scientists would communicate orally
and in writing in this language.  It is certainly a natural language.
        As to how this is accomplished, basically SYNTAX IS ELIMINATED.
Word order is unimportant, speaking is thus comparable to adding a
series of facts to a data-base.
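
That comparison to a data-base can be made concrete with a toy sketch (the
role names and the example sentence here are invented for illustration): two
permutations of the same role/filler facts assert exactly the same thing.

```python
# An utterance as an unordered set of (role, filler) facts; permuting
# the "words" leaves the asserted content identical.
utterance_a = {("agent", "Devadatta"), ("action", "cook"), ("object", "rice")}
utterance_b = {("object", "rice"), ("agent", "Devadatta"), ("action", "cook")}
print(utterance_a == utterance_b)  # True: order carries no meaning

# Speaking is then just adding the facts to a data-base.
database = set()
database |= utterance_a
print(("agent", "Devadatta") in database)  # True
```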
        What interests me about this language is:
        1) Many theories derived recently in Linguistics and AI were
           independently in use over a thousand years ago, without
           computers or any need to eliminate ambiguity except for
           precise thinking and communication
        2) A natural language can serve as a mathematical (or artificial)
           language, and thus the dichotomy between the two is false.
        3) There are methods for translating "regular" Sanskrit into
           Sastric Sanskrit, from which NLP research could learn much.
        4) The possibilities of this language serving as interlingua
           for MT.

        There are no translated texts, and it takes Sanskrit experts a
very long time to analyze them, so a translation of a full work
in this language is a long way off.  However, those interested can get
hold of "Vaiyakarana-Siddhanta-Laghu-Manjusa" by Nagesha Bhatta.

Rick Briggs
NASA Ames

------------------------------

Date: Thu, 27 Sep 84 16:05:37 edt
From: Walter Hamscher <walter@mit-htvax>
Subject: Onyx BC8820 Stone Block Reader

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

Professor Petra Hechtman of the Archaeology Dept has an Egyptian
tombstone written in Hieroglyphs on an Onyx C8002 system running
ONYX IV.II that he needs to read.  The Onyx system that the
block was written with has died (legend has it that it is archived
in the temple of Tymsharin).  He needs to get the data off the
rock soon so that the exact date of Graduate Student Lunches can
be calculated (the most recent prediction fixes the date of the
next "bologna eclipse" as Friday the 28th at noon in the Third Floor
Playroom, hosted by David "Saz" Saslov and Mike "Mpw" Wellman).
According to Data Gene-rock, the original Filer was 1/4 cubit,
6250 spd (strokes per digit), 90 RAs, up to 10K BC.  Anyone who has,
knows of, or has chips off the original device that might be
able to decipher the stone, please contact Prof. Hechtman at
x5848, or at /dev/null@mit-htvax.

------------------------------

Date: Mon 1 Oct 84 10:25:14-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Learning in SOAR

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, October 5, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Paul S. Rosenbloom
             Assistant Professor

ABSTRACT:    Towards Chunking as a General Learning Mechanism

Chunks have long been proposed as a basic organizational unit for
human memory.  More recently chunks have been used to model human
learning on simple perceptual-motor skills.  In this talk, I will
present recent progress in extending chunking to be a general learning
mechanism by implementing it within a general problem solver.
Combining chunking with the SOAR problem-solving architecture
(described by John Laird in the SigLunch of September 28), we can take
significant steps toward a general problem solver that can learn about
all aspects of its own behavior.  The combination of a simple learning
mechanism (chunking) with a sophisticated problem-solver (SOAR)
yields: (1) practice speed-ups, (2) transfer of learning between
related tasks, (3) strategy acquisition, (4) automatic
knowledge-acquisition, and (5) the learning of general macro-operators
of the type used by Korf (1983) to solve Rubik's cube.  These types of
learning are demonstrated for traditional search-based tasks, such as
tic-tac-toe and the eight puzzle, and for R1-SOAR (a reformulation of
a portion of the R1 expert system in SOAR).

This work has been pursued in collaboration with John Laird (Xerox
PARC) and Allen Newell (Carnegie-Mellon University).

------------------------------

Date: 24 Sep 1984 18:13-EDT
From: ABN.CJMERRICK@USC-ISID.ARPA
Subject: Conference - Knowledge-Based Command and Control


                 SYMPOSIUM & EXHIBITION ON "ARTIFICIAL
                      INTELLIGENCE" TO BE HELD IN
                         KANSAS CITY, MISSOURI


                 "THE ROLE OF KNOWLEDGE BASED SYSTEMS
                         IN COMMAND & CONTROL"

                             SPONSORED BY:
                     KANSAS CITY CHAPTER OF AFCEA

                          OCTOBER 17-19, 1984


     The Kansas City Chapter of the Armed Forces Communications and
Electronics Association is proud to announce that it is sponsoring
its Second Annual Symposium and Exhibition to discuss the applicability
of artificial intelligence and knowledge based systems to command and
control requirements, in both the military and commercial environments.
     The Symposium will be enhanced by the presence of hardware and
software exhibits, representing advances in technology related to the
theme.
     Highlights of the Symposium will include noted individuals such
as Dr. Joseph V. Braddock of the BDM Corporation addressing user
perspectives of utilizing knowledge based systems to fulfill command
and  control needs.  Dr. Robert W. Milne of the Air Force Institute
of Technology will address AI technology and its application to
command and control.
     A luncheon presentation will be given by Lieutenant General
Carl E. Vuono, Commander, Combined Arms Center, Fort Leavenworth
and Deputy Commander, Training and Doctrine Command.
     General Donn A. Starry (Ret), Vice President and General Manager,
Space Missions Group of Ford Aerospace and Communications Corporation
will be the guest speaker following the evening meal on Thursday.
     The Symposium and Exhibition will be held over a three-day
period commencing with an opening of the exhibit area and a cocktail
and hors d'oeuvres social on October 17, 1984.  Technical sessions
will begin at 8:00 a.m. on October 18.  The format of the technical
presentation will consist of two high intensity panel discussions,
a session in which pertinent papers will be presented and two guest
lectures.

                          ABBREVIATED AGENDA

     WEDNESDAY, 17 OCTOBER 1984

1200-1700     Check in & Registration
1700-1900     Welcome Social & Exhibits Open

     THURSDAY, 18 OCTOBER 1984

0800-1145     SESSION I - Panel Discussion:  "Status and Forecast of
              AI Technology as it applies to Command and Control"
              Panel Moderator:
                   Mr. Herbert S. Hovey, Jr.
                   Director, U.S. Army Signals Warfare Laboratory
                   Vint Hill Farms Station
                   Warrenton, Virginia  22186

1145-1330     Luncheon/Guest Speaker:
                   Lieutenant General Carl E. Vuono
                   Commander, U.S. Army Combined Arms Center
                   Deputy Commander, Training and Doctrine Command
                   Fort Leavenworth, Kansas  66207

1330-1700     SESSION II - Presentation of Papers

1700-1830     Social Hour

1830-2030     Dinner/Evening Speaker:
                   General Donn A. Starry (Ret)
                   Vice President & General Manager
                   Space Missions Group of Ford Aerospace and
                   Communications Corporation

     FRIDAY, 19 OCTOBER 1984

0800-1200     SESSION III - Panel Discussion:  "User Perspectives of
              Pros and Cons of Knowledge Based Systems in Command and
              Control"
              Panel Moderator:
                   Brigadier General David M. Maddox
                   Commander, Combined Arms Operations Research Activity
                   Fort Leavenworth, Kansas  66027


To make reservations or for further information, write or call:

                   AFCEA SYMPOSIUM COMMITTEE
                   P.O. Box 456
                   Leavenworth, Kansas  66048
                   (913) 651-7800/AUTOVON 552-4721


                   MILITARY POC IS:

                   CPT (P) CHRIS MERRICK
                   CACDA, C3I DIRECTORATE
                   FORT LEAVENWORTH, KANSAS 66027-5300
                   AUTOVON:  552-4980/5338
                   COMMERCIAL:  (913) 684-4980/5338
                   ARPANET:  ABN.CJMERRICK

------------------------------

End of AIList Digest
********************

∂03-Oct-84  1218	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #130    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Oct 84  12:18:22 PDT
Date: Wed  3 Oct 1984 10:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #130
To: AIList@SRI-AI


AIList Digest           Wednesday, 3 Oct 1984     Volume 2 : Issue 130

Today's Topics:
  Games - Chess Program,
  Pattern Recognition - Minimal Spanning Trees,
  Books - Tim Johnson's Report,
  Academia - Top Graduate Programs,
  AI Tools - OPS5 & Windows,
  Games - Computer Chess Tournament & Delphi Game
----------------------------------------------------------------------

Date: Tue, 2 Oct 84 21:46:14 EDT
From: "David J. Littleboy" <Littleboy@YALE.ARPA>
Subject: Chess Request

I  would  like  to acquire a state of the art chess program, preferably better
than USCF 1500, to run on  a  68000  based  machine  (an  Apollo).   Something
written  in  any  of the usual languages (C, Pascal) would probably be useful.
Since I intend to use it  as  an  opponent  for  the  learning  program  I  am
building,  I would also like the sources.  I am, of course, willing to pay for
the program.  Any pointers would be greatly appreciated.  Alternatively,  does
anyone know of a commercial chess machine with an RS-232 port?

                                          Thanks much,
                                          David J. Littleboy
                                          Littleboy@Yale
                                          ...!decvax!yale!littleboy

By  the  way,  the  basic  theoretical claim I start from is that the "problem
space" a chess player functions in is determined not so much by  the  position
at  hand,  as by the set of ideas, plans, and experiences he brings to bear on
that position.  Thus I view chess as a planning activity, with the goals to be
planned for deriving from a player's experiences in similar positions.

------------------------------

Date: 2 Oct 1984 11:25-cst
From: "George R. Cross" <cross%lsu.csnet@csnet-relay.arpa>
Subject: MST distributions

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

I am interested in references to the following problem:

Suppose we have n points uniformly distributed in a subset S contained in
p-dimensional Euclidean space R↑p:

1. What is the distribution of the largest edge length of the Minimum
Spanning Tree (MST) over the n points?  Assume Euclidean distance is
used to define the edge weights.

2. What is the distribution of the lengths of the edges in the MST?

3. What is the distribution of the size of the maximal clique?

Asymptotic results or expected values of these quantities would be
interesting also.  We expect to make use of this information in
cluster algorithms.
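
No closed forms are offered here, but the first two quantities are easy to
estimate empirically.  A hedged Monte Carlo sketch (Prim's algorithm on the
complete Euclidean graph; repeating the sampling step many times yields an
empirical distribution of the longest MST edge):

```python
import math
import random

def mst_edge_lengths(points):
    """Prim's algorithm on the complete Euclidean graph;
    returns the lengths of the n-1 edges of the MST."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # best[i]: shortest distance from i to the tree
    best[0] = 0.0
    lengths = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if best[u] > 0.0:
            lengths.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return lengths

# One sample: 50 points uniform on the unit square (p = 2, S = [0,1]^2).
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
edges = mst_edge_lengths(pts)
print(len(edges), max(edges))  # 49 edge lengths; the longest-edge statistic
```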

Thanks,
        George Cross
        Computer Science
        Louisiana State University

        CSNET: cross%lsu@csnet-relay

------------------------------

Date: Tue 2 Oct 84 09:54:13-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Tim Johnson's Report

The Commercial Application of Expert Systems Technology by Tim Johnson is
a 1984 publication from Ovum Ltd., 14 Penn Road, London N7 9RD, England.
It is also available from IPI, 164 Pecora Way, Portola Valley, Ca. 94025
and sells for $395.  The report is 382 pages and primarily covers expert
systems research in the USA and UK, although it also describes some of the
larger research projects worldwide.

Harry Llull, Stanford University Math/CS Library

------------------------------

Date: 29 Sep 84 20:19:50-PDT (Sat)
From: decvax!ittvax!dcdwest!sdcsvax!daryoush @ Ucb-Vax.arpa
Subject: Re: Top Ten
Article-I.D.: sdcsvax.149

Stanford is definitely one of the 3 best, if not THE best.

--id

------------------------------

Date: 3 Oct 84 11:41:55 EDT
From: BIESEL@RUTGERS.ARPA
Subject: OPS5 info summary.

Thanks are due to all the folks who responded to my request for information
on OPS5. What follows is a summary of this information.

There are at least three versions of OPS5 currently available:

1) DEC Compiler QA668-CM in BLISS, available to 2 and 4 year degree granting
institutions for $1000. Documentation:
        AA-GH00A-TE  Forgy's Guide
        AA-BH99A-TE  DEC's User Guide

2)Forgy's version (Charles.Forgy@CMU-CS-A), running under Franz Lisp on
VAXen. A manual is also available from the same source.

3)A T Lisp version created by Dan Neiman and John Martin at ITT
(decvax!ittvax!wxlvax!martin@Berkeley). This version is also supported by
some software tools, but cannot be given away. For costs and procedures
contact John Martin.

Short courses on OPS5 are available from:
        Smart System Technology
        6870 Elm Street
        McLean, VA 22101
        (703) 448-8562

Elaine Kant and Lee Brownston@CMU-CS-A, Robert Farrell@Yale and Nancy Martin
at Wang Labs are writing a book on OPS5, to be published this Spring by
Addison-Wesley.

        Regards,
                Pete

------------------------------

Date: Mon 1 Oct 84 14:35:13-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Summary of Window Responses

I got several replies to my question about the relation between windows
and expert systems.  The consensus seemed to be that since an expert
system development environment is like a programming environment, and
since PEs are known to benefit from having multiple windows available,
windows are an important part of expert system tools.  Incidentally, the
issue of graphics is orthogonal - graphics is useful in a great number
of applications (try describing the weirder geologic formations in words!),
although perhaps not all.

I have a little trouble with both assumptions.  I looked in my nifty
collection of reprints, "Interactive Programming Environments" (Barstow,
Shrobe, and Sandewall, eds., pub. by McGraw-Hill),
and found no research supporting the second assertion.  Its main
support appeared to be anecdotal.  My own anecdotal experience
is that even experienced users spend an inordinate amount of clock
time trying to do something right, but are not aware of just how
much time they're taking (pick a menu item, oops!, undo, try again,
then search all over the screen for 5 chars of text, then go through
an elaborate sequence of ops to grab those chars, paste them in the
wrong place when your mouse hand jiggles, delete, and try again, etc).
It's interesting to note that Winograd's two papers (from 1974 and 1979)
talk about all kinds of things that a PE should have, but with no mention
of graphics anywhere.

The first assertion appears to be true, and is a sad comment on the
sophistication of today's expert system tools.  If expert system
environments are just PEs, why not just supply PEs?  What's the
important difference between a Lisp stack backtrace and a rule
system backtrace?  Why can't today's expert system tools at least
provide a TMS and some detailed explanation facilities?  Why
hasn't anybody included some meta-level knowledge about the tool
itself, as opposed to supplying an inscrutable block of code and
a (possibly correct) user's manual?  I don't understand.  It seems
as though the programming mentality reigns supreme (if you don't
understand that remark, go back and carefully reread Winograd's
1979 paper "Beyond Programming Languages", in CACM and reprinted
in the abovementioned book).

                                                        stan shebs

------------------------------

Date: Tue Oct  2 12:24:29 1984
From: mclure@sri-prism
Subject: reminder of upcoming computer chess tournament in San
         Francisco

    This is a reminder that this coming Sunday (Oct 7) will herald the
beginning of the battle of the titans at the San Francisco Hilton
"continental parlors" room at 1pm.

    Cray Blitz, the reigning world champion program, will attempt to
squash the vengeful Belle.  Nuchess, a perennial "top-finishing
contender" and descendant of Chess 4.5, wants a piece of the action and
would be very happy to see the Belle/Cray Blitz battle cause both to go
up in a puff of greasy, black smoke, leaving Nuchess as the top dog for
the entire year.

    It promises to be as interesting as it is every year.  You don't
have to be a computer-freak or chess-fanatic to enjoy the event.

    Come on by for a rip-roaring time.

        Stuart

------------------------------

Date: Sun Sep 30 16:02:03 1984
From: mclure@sri-prism
Subject: Delphi 15: cruncher nudges bishop

The Vote Tally
--------------
The winner is: 14 ... Ne8
There were 16 votes. We had a wide mixture. The group seemed to have
difficulty forming a plan. Many different plans were suggested.

The Machine Moves
-----------------
        Depth   Move    Time for search         Nodes      Machine's Estimate
        8 ply   h3       6 hrs, 4 mins         2.18x10↑     +4% of a pawn
                (P-KR3)

                Humans                    Move        # Votes
        BR ** -- BQ BN BR BK **       14 ... Ne8        4
        ** BP ** -- BB BP BP BP       14 ... Rc8        3
        BP ** -- BP -- ** -- **       14 ... Nh5        3
        ** -- ** WP BP -- ** --       14 ... Nd7        2
        -- ** -- ** WP ** BB **       14 ... Qd7        2
        ** -- WN -- WB WN ** WP       14 ... Nxe4       1
        WP WP -- ** WQ WP WP **       14 ... Qb6        1
        WR -- ** -- WR -- WK --
             Prestige 8-ply

The machine's evaluation turned from negative to slightly positive.
Apparently it likes this position somewhat but still considers the
position even.

The Game So Far
---------------
1. e4  (P-K4)   c5 (P-QB4)  11. Be2 (B-K2)  Nxe2 (NxB)
2. Nf3 (N-KB3)  d6 (P-Q3)   12. Qxe2 (QxN)  Be7 (B-K2)
3. Bb5+(B-N5ch) Nc6 (N-QB3) 13. Nc3 (N-QB3) O-O (O-O)
4. o-o (O-O)    Bd7 (B-Q2)  14. Be3 (B-K3)  Ne8 (N-K1)
5. c3 (P-QB3)   Nf6 (N-KB3) 15. h3 (P-KR3)
6. Re1 (R-K1)   a6 (P-QR3)
7. Bf1 (B-KB1)  e5 (P-K4)
8. d4  (P-Q4)   cxd4 (PXP)
9. cxd4 (PXP)   Bg4 (B-N5)
10. d5  (P-Q5)  Nd4 (N-Q5)

Commentary
----------
    BLEE.ES@XEROX
        14  ...  Ne8 as
        14  ...  Nh5?; 15. h3 B:f3 (if 15 ... Bd7?; 16. N:e5
        and white wins a pawn) 16. Q:f3 Nf6 (now we've lost
        the bishop pair, a tempo and the knight still blockades
        the f pawn and the white queen is active...)
        (if 16 ... g6?; 16. Bh6 Ng7; 17. g4 and black can't support f5 because
        the light square bishop is gone) while
        14 ... Nd7?; 15. h3 Bh5; 16. g4 Bg6; and black has trouble supporting
        f5. I expect play to proceed:
        15. h3    Bd7
        16. g4    g6
        17. Bh6   Ng7
        18. Qd3   f5 (at last!)
        19. g:f5  g:f5

    JPERRY@SRI-KL
        In keeping with the obvious strategic plan of f5, I
        vote for 14...N-K1.  N-Q2 looks plausible but I would
        rather reserve that square for another piece.

    SMILE@UT-SALLY
        14 ... Nh5.
        Paves the way for f5. Other possibility is Qd7 first. Either
        way I believe f5 is the key (as it often is!).

    REM@MIT-MC
        I'm not much for attacking correctly, so let's prepare
        to double rooks: 14.  ...  Q-Q2 (Qd7) (It also helps a
        K-side attack if somebody else can work out the details.)

    VANGELDER@SU-SCORE
        14. ... Nxe4 (vote)
        In spite of what the master says, White can indefinitely prevent f5 by
        h3, Bd7, g4.  Will the computer find this after Ne8 by Black?
        Stronger over the board is 14 ... Nxe4.  If 15. Nxe4 f5 16. N/4g5 f4
        and Black regains the piece with advantage.  The
        majority will probably not select this move, which may
        be just as well, as attack-by-committee could present
        some real problems.  Nevertheless, the computer
        presumably saw and examined several ply on this line and
        it would be interesting to see what it thinks White's
        best defense is.  An alternate line for White is 15.
        Nxe4 f5 16.  N/4d2 e4 17.  h3 Bh5 18.  Bd4 Bg4!?  19.
        Nxe4 fxe4 20.  Qxe4 Bxf3 21.  gxf3 Rf4.
        There are many variations, but most are not decisive in
        8 ply, so the computer's evaluation function would be
        put to the acid test.

    ACHEN.PA@XEROX
        13 ... Nh5 (keep up the pressure)
        this might provoke 14 g3 Bd7, either 15 Nd2 or h4 to
        start a counter attack.  the black is hoping to exchange
        the remaining knight with queen's bishop 16 ...  Nf4
        then maybe attempt to encircle the white with Qb6
        attacking the weakside behind the pawns.  (note: if 13
        ...  Nh5 can't 14 ...  f5 for the obvious reason)

Solicitation
------------
    Your move, please?

        Replies to Arpanet: mclure@sri-prism, mclure@sri-unix or
        Usenet: ucbvax!menlo70!sri-unix!sri-prism!mclure

------------------------------

End of AIList Digest
********************

∂06-Oct-84  1720	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #131    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Oct 84  17:14:31 PDT
Date: Fri  5 Oct 1984 09:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #131
To: AIList@SRI-AI


AIList Digest             Friday, 5 Oct 1984      Volume 2 : Issue 131

Today's Topics:
  Linguistics - Sastric Sanskrit & LOGLAN & Interlinquas
----------------------------------------------------------------------

Date: Wed, 3 Oct 1984  23:55 PDT
From: KIPARSKY@SU-CSLI.ARPA
Subject: Sanskrit has ambiguity and syntax

Contrary to what Briggs claims, Shastric Sanskrit has the same kinds of
ambiguities as other natural languages. In particular, the language
allows, and the texts abundantly exemplify: (1) anaphoric pronouns
with more than one possible antecedent, (2) ambiguous scope of
quantifiers and negation, (3) ellipses, (4) lexical homonymy, (5)
morphological syncretism.  Even the special regimented language in
which Panini's grammar of Sanskrit is formalized (not a natural
language though based on Sanskrit) falls short of complete unambiguity
(see Kiparsky, Panini as a Variationist, MIT Press 1979).  The claim
that Sanskrit has no syntax is also untrue, even if syntax is
understood to mean just word order: rajna bhikshuna bhavitavyam would
normally mean "the beggar will have to become king", bhikshuna rajna
bhavitavyam "the king will have to become a beggar" --- but in any
case, there is a lot more to syntax than word order.

------------------------------

Date: Wed, 3 Oct 84 01:23:07 PDT
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: Sastric Sanskrit


Re: Rick Briggs' comments on a version of Sastric Sanskrit.

Well,  I AM incredulous!  Imagine.  The entire natural language
processing problem in AI has already been solved!  and a millennium ago!
All we need to do now is publish a 'manual' of this language and
our representational problem in NLP is over!  Since this language
can say anything you want,  and "mean exactly what you say"  and
"with no effort",  and since it is unambiguous,  it sounds like
my problems as an NLP researcher are over.

I DO have a few minor concerns (still).  The comment that
there are no translations,  and that it takes Sanskrit scholars
a "very long time"  to figure out what it says,  makes it sound to
me like maybe there is some complex interpretation going on.
Does this mean that a 'parser' of some sort is still needed?

Also,  I'd greatly appreciate a clearer reference to the book (?)
mentioned.  Who is the publisher?  Is it in English?  What year
was it published?  How can we get a copy?

Another problem:  since this language has an "extensive literature" does
that include poetry?  novels?  Are the poems unambiguous?  are there
plays on words?  metaphor?  (Can you say the equivalent of "Religion is
the opiate of the masses"?  and if not, is that natural?  if not, then
how are analogical mappings formed?) satire?  humor?  puns?
exaggeration?  fantasy?  does the language look like a bunch of horn
clauses?  (most of the phenomena in the list above involve AMBIGUITY of
context, beliefs, word senses, connotations, etc. How does the
literature avoid these features and remain literature?)

Finally,  Yale researchers have been arguing that representational
systems for story understanding require explicit conceptual structures
making use of scripts, plans, goals,  etc.  Do such constructs
(e.g. scripts) exist explicitly in the language?

does its literature make use of idioms?
e.g. "John drove Mary [home]"  vs
     "John drove Mary [to drink]"

Also,  why is English "worse" than other languages?  Chinese has
little syntax and it's ambiguous.  Latin has very free word order
with prefixes and suffixes and it's ambiguous.  Both rely heavily on
context and implicit world knowledge.  Early work by Schank
included representing a Mayan dialect (i.e. Quiché) in Conceptual
Dependency.  Quiché seems to have features standard to other natural
languages,  so how is English worse?

In the book "Reader over Your Shoulder", Graves & Hodge  have a humorous
piece about some town councilmen trying to write a leash law.
No matter how they state it,  unhappy assumptions pop up.
e.g.  "No dogs in the park without a leash"  seems to be addressed
to the dogs.  "People must take their dogs into the park on a leash"
seems to FORCE people to drag their dogs into the park (and at what hour?)
even if they don't want to do so. etc etc

what about reference?  does Sastric Sanskrit have pronouns?
what about IT?  does IT have THEM? etc  if so,  how does it avoid
ambiguous references?  how many different types of pronouns does it
have (if any)?

Let's have some specific examples.  E.g. does it have the equivalent of
the word "like"?  Before you answer "yes",  there's a difference
between "John likes Newsweek"  and "John likes chocolate".

In one case we want our computer to infer that John likes to "eat"
chocolate  (not read it)  and in the other case that he likes to
read Newsweek (not eat it).    Sure,  I COULD have said
"John likes to eat chocolate" specifically.  But I can abbreviate
that simply to "x likes <object>"  and let the intelligent listener
figure out what I mean.   When I say "John likes to eat chocolate"
do I mean he enjoys the activity of eating,  or that he feels
better after he's eaten?   When I say "John likes to eat
chocolate but feels terrible afterwards"  I used the word "but"
because I know it violated a standard inference on the part of the
listener.  Natural languages are "expectation-based".  Does this
ancient language require the speaker to explicitly state all
inferences & expectations?
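As a concrete (if toy) illustration of the expectation-based point above: the listener's default inference for "X likes <object>" can be sketched as a lookup keyed on the object.  The table and function names below are purely illustrative, not part of any system discussed here.

```python
# Toy sketch of expectation-based inference for "X likes <object>".
# The preference table is illustrative; a real NLP system would need
# far richer world knowledge and context handling.

DEFAULT_ACTION = {
    "chocolate": "eat",
    "newsweek": "read",
}

def expand_likes(subject, obj):
    """Fill in the action a listener would infer for 'subject likes obj'."""
    action = DEFAULT_ACTION.get(obj.lower(), "experience")
    return f"{subject} likes to {action} {obj}"

print(expand_likes("John", "chocolate"))   # John likes to eat chocolate
print(expand_likes("John", "Newsweek"))    # John likes to read Newsweek
```

The interesting question, of course, is where such a table comes from and how context overrides it; the sketch only shows what the listener is silently filling in.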

Like I said already,  if this ancient language really does what
is claimed,  then we should all dump the puny representational
systems we've been trying to invent and extend over the last
decade and adopt this ancient language as our final say
on semantics.

Recent work by Layman Allen (1st Law & Technology conference)
in normalizing American law shows that the logical connectives
used by lawyers are horribly ambiguous.  Lawyers use
content semantics to avoid noticing these logical ambiguities.
Does this brand of sanskrit have a text of ancient law?  What
connectives did they use?  Maybe the legal normalization problem
has also already been solved.

Did they have a dictionary?  If so, can we see some of the entries?  How
do the dictionary entries combine?  No syntax AT ALL?  Loglan adds
suffixes onto everything and it's plenty awkward.  It has people who
write poems in it and other "literature" but you can probably pack all
loglanners who "generate" loglanese into a single phone booth.

Just how many ancient scholars spoke this sanskrit?

I look forward to more discussion on this incredible language.

--  A still open-minded but somewhat skeptical inquirer

------------------------------

Date: Thursday,  4-Oct-84 23:59:06-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: An Unambiguous Natural Language?


     There was a recent claim in this digest that a "branch of Sastric
Sanskrit" was an unambiguous natural language.  There are a number of
points I'd like to raise:

(a)  If there are no translated texts, and if it takes a very long
     time for an expert in "ordinary" Sanskrit to read untranslated
     texts, it seems more than likely that the appearance of being
     free from ambiguity is an illusion due to our ignorance.

(b)  Thanks for the reference.  But judging by the title you need to
     know a lot more about Indian languages to read it than most of
     the readers of this digest, and without knowing the publisher one
     would have to be thoroughly at home with the literature to even
     find it.

(c)  It's news to me that Sanskrit wasn't an Indo-European language.
     The Greek-English dictionary I have a copy of keeps pointing to
     Sanskrit roots as if the two languages were related, but what do
     they know?  If Sastric Sanskrit is an Indo-European language, it
     is astonishing that it alone is unambiguous.  It's especially
     astonishing when the one non-Indo-European language of which I
     have even the sketchiest acquaintance (Maaori) isn't unambiguous
     either and when no-one seems to be claiming that Japanese or
     Chinese or any other common living language is unambiguous.

(d)  Dead languages are peculiarly subject to claims of perfection.
     Without a living informant, we cannot tell whether our failure to
     discover another reading means there isn't one or whether it just
     means that we're ignorant of a word sense.  I suppose this is
     point (a) again.

(e)  If a language permits metaphor, it is ambiguous.  The word for
     "see" in ordinary Sanskrit is something like "oide", and I'm told
     that it can mean "understand" as well as "perceive with the eye".
     Do we KNOW that the Sastric Sanskrit words for "see", "grasp",
     and so on were NEVER employed with this meaning?

(f)  We're actually dealing with an ambiguous term here: "ambiguous".
     The following definition is the only one I can think of which is
     not dependent on some "expert's" arbitrary choice:
        a sentence S in a text is ambiguous if
        taking into account assumed common knowledge and the
        context supplied by the rest of the text
        there is some natural language L such that
        S has at least two incompatible translations in L.
     Here's an example: there are four people in a room, A, B, C, D.
     This is the beginning of the text, and nothing else in the text
     lets us judge these points, and we've never heard of A,B,C,D
     before.  A says to D: "we came from X."
     I assume we know exactly what place X is.  Now, does A mean that
        A,B,C and D all came from X?  (reminding D)
        A,B,C came from X?
        A and D came from X? (he knows B and C are listening)
        A and one of B and C came from X?
     We need to distinguish between dual and plural number, and
     between inclusive first person and exclusive first person.  If
     the language L marks the gender of plural subjects, we may need
     to know in the case of A and (B or C but not both) which of B
     and C was intended.  Now consider A mentioning to D "that table",
     assuming that there are several tables in the same room, all of
     the same sort.  We need to know whether the table he is indicating
     is near D (it can't be near A or he'd say "this table") or whether
     it is distant from both A and D.  Does the branch of Sanskrit in
     question make all these distinctions?  Can every tense in it be
     translated to a unique English tense?  Does it have no broad
     colour terms such as the "grue" present in several languages?
     Failing that, by what criterion IS it unambiguous?
     {What's a better definition of ambiguity?  This one strikes
     most people I've offered it to as too strong.}

(g)  Absence of syntax is no guarantee of unambiguity.  Consider the
     phrase "blackbird".  It doesn't matter how we indicate that
     black modifies bird, the source of ambiguity is that we don't
     know whether the referent is some generic bird that happens to
     be black (a crow, say), or whether this phrase is used as the
     name of a species.  In English you can tell the difference by
prosody, but that doesn't work too well with long-dead languages,
and if you thought it always meant Turdus merula you might never
     find anything in the fixed stock of surviving texts to reveal
     the mistake.

(h)  What evidence is there that this language was spoken?  Note that
     if a text in this language quotes someone as speaking in it,
     that still isn't evidence that the language was spoken.  I've
     just been reading a book set in Greece, with Greek characters,
     but the whole thing was in English...  Are there historians
     writing in other languages who say that the language was spoken?

(i)  There is another ambiguous term: "natural" language.  Is Esperanto
     a natural language?  Is Shelta?  The pandits were nobody's fools;
     after all, Panini invented Backus-Naur form for the express
     purpose of describing Sanskrit, and I am not so contemptuous of
     the ancient Indians as to say that they couldn't do a better job
     of designing an artificial language than Zamenhof did.

I'm not saying the language isn't unambiguous, just that it's such a
startling claim that I'll need more evidence before I believe it.

------------------------------

Date: 3 Oct 84 12:57:24-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!sdamos!elman @ Ucb-Vax.arpa
Subject: Re: Sanskrit
Article-I.D.: sdamos.17

Rick,

I am very skeptical about your claims that Sastric Sanskrit is an
unambiguous language.  I also  feel you misunderstand the nature
and consequences of ambiguity in natural human language.

    |        The language is a branch of Sastric Sanskrit which flourished
    |between the 4th century B.C and 4th century A.D., although its
    |beginnings are somewhat older.  That it is unambiguous is without
    |question.

Your judgment is probably based on written sources.  The sources may also
be technical texts.  All this indicates is that it was possible to write
in Sastric Sanskrit with a minimum of ambiguity.  So what?   Most languages
allow utterances which have no ambiguity.  Read a mathematics text.

    |The problem is that most (maybe all) of us are used
    |to languages like English (one of the worst) or other languages which
    |are so poor as vehicles of transmission of logical data.

I think you have fallen victim to the trap of egocentrism.  English is
not particularly less (or more) effective than other languages as a vehicle
for communicating logical data, although it may seem that way to
a native monolingual speaker.

    |        The facility and ease with which these Indians communicated
    |indicates that it is possible for a natural language to serve all
    |purposes of artificial languages based on logic.

How do you know how easily they communicated?   I'm serious.  And
how easily do you read a text on partial differential equations?  An
utterance which is structurally unambiguous may not be the easiest to
read.

    |If one could say what one wishes to say with absolute clarity (although
    |with apparent redundancy) in the same time and with the same ease as
    |you say part of what you mean in English, why not do so?  And if a
    |population actually got used to talking in this way there would be
    |much more clarity and less confusion in our communication.

Here we come to an important point.  You assume that the ambiguity of
natural languages results in loss of clarity.  I would argue that
in most cases the structural ambiguity in utterances is resolved
by other (linguistic or paralinguistic) means.  Meaning is determined
by a complex interaction of factors, of which surface structure is but one.
Surface ambiguity gives the language a flexibility of expression.  That
flexibility does not necessarily entail lack of clarity.  Automatic
(machine-based) parsers, on the other hand, have a very difficult time
taking all the necessary interactions into account and so must rely more
heavily on a reliable mapping of surface to base structure.

    |        As to how this is accomplished, basically SYNTAX IS ELIMINATED.
    |Word order is unimportant, speaking is thus comparable to adding a
    |series of facts to a data-base.

Oops!  Languages may have (relatively) free word order and still have
syntax.   A language without syntax would be the linguistic find of
the century!

In any event, the principal point I would like to make is that structural
ambiguity is not particularly bad nor incompatible with "logical" expression.
Human speech recognizers have a variety of means for dealing with
ambiguity.  In fact, my guess is we do better at understanding languages
which use ambiguity than languages which exclude it.

Jeff Elman
Phonetics Lab, Dept. of Linguistics, C-008
Univ. of Calif., San Diego, La Jolla, CA 92093
(619) 452-2536,  (619) 452-3600

UUCP:      ...ucbvax!sdcsvax!sdamos!elman
ARPAnet:   elman@nprdc.ARPA

------------------------------

Date: Friday,  5 Oct 1984 10:15-EDT
From: jmg@Mitre-Bedford
Subject: Loglan, properties of interlinguas, and NLs as interlinguas

        There has been a running conversation regarding the use of an
intermediate language or interlingua to facilitate communication between
man and machine.  The discussion lately has focused on whether or not it
is possible or even desirable for a natural language (i.e., one which was
made for and spoken/written by humans in some historical and cultural
context) to serve in this role.  At last glance it would seem to be a
standoff between the cans and cannots.  It might be interesting to see
if a consensus can at least be reached regarding what an interlingua
might be like and therefore whether any natural languages or formal ones
for that matter would fit or could be made to fit the necessary form.
        It would seem that a candidate language would possess a fair
sample of the following characteristics (feel free to add to or modify
this list):
        1) small number of grammar rules--to reduce the trauma of learning
a new language, simplify parsing programs, and generally speed up the works
        2) small number of speech sounds--to ease learning, and, if well
chosen, improve the distinction between sounds and thus the
apprehensibility of the spoken language
        3) phonologically consistent--for similar reasons as 2) above
        4) relative freedom from syntactic ambiguity--to ease translation
activities and provide an experimental tool for exploring ambiguity in
NLs and thought
        5) graphologically regular/consistent with phonology--to ease the
transition to the interlingua by introducing no new characters and only
simple spelling rules
        6) simple morphology--to improve the recognizability of words and
word types by limiting the structures of legal words to a few and making
word construction regular
        7) resolvability--to aid in machine and human information extraction,
particularly in noisy environments, by combining  well-chosen phonology and
morphology
        8) freedom from cultural or metaphysical bias--to avoid introducing
unintended effects due to specific built-in assumptions about the universe
that may be contained within the language
        9) logical clarity--to ensure the ability to construct the classical
logical connections important to semantically and linguistically useful
expressions
       10) wealth of metaphor--to allow this linguistic feature to be studied
and provide a creative tool for expression

        These features were selected to try to characterize the intent of
a hypothetical designer of an interlingua.  Possibly no product could fully
merge all the features without unacceptably compromising some of the
desirable traits.  If this list appears unacceptable, make suggestions
and/or additions and deletions until a workable list results.
        It is likely that no current or historical natural language would
combine a sufficient number of the above features to stand out as an obvious
choice to use as interlingua.  Simplicity, regularity, ease of learning,
ease of information extraction, lack of syntactic ambiguity, and the rest
are the earmarks of a constructed language.  It remains to be seen whether a
language so constructed can be used by humans to express, without
restriction, the full range of human thought.
        In response to Dr. Dyer's comment about Loglan, I can testify that it
is not all that hard to get around in.  It is a "foreign" language, however,
and thus takes some learning and getting used to.  It does have several of
the features that an interlingua would.  Only experience will ultimately
reveal whether it is "natural" enough to be useful for exploring the
relationship between thought and language and formal enough to be
machine-realizable.

                            -Michael Gilmer
                            jmg@MITRE-BEDFORD.ARPA

------------------------------

End of AIList Digest
********************

∂07-Oct-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #132    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Oct 84  10:54:42 PDT
Date: Fri  5 Oct 1984 10:19-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #132
To: AIList@SRI-AI


AIList Digest            Saturday, 6 Oct 1984     Volume 2 : Issue 132

Today's Topics:
  Bindings - Query about D. Hatfield,
  Applications - AI and Business,
  AI Literature - List of Sources,
  Academia - Top Graduate Programs,
  Conference - Fifth Generation at ACM 84,
  AI Tools - OPS5 & YAPS & Window Systems & M.1,
  Scientific Method - Induction,
  Seminar - Natural Language Structure
----------------------------------------------------------------------

Date: Wed, 3 Oct 1984  15:54 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: Query about D. Hatfield

      Wed., Aug. 29 Computer Science Seminar at IBM-SJ
      10:00 A.M.  WYSIWYG PROGRAMMING
                D. Hatfield, IBM Cambridge Scientific Center
                Host:  D. Chamberlin


This message appeared some time ago.  [Can someone provide]
any pointers to the speaker, D. Hatfield?  Does he have any papers
on the same subject?  Thanks.

Fanya Montalvo, MIT, AI Lab.

------------------------------

Date: 3 Oct 84 8:39:05-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!noscvax!bloomber @ Ucb-Vax.arpa
Subject: Re: AI for Business
Article-I.D.: noscvax.641

  I would also be interested in pointers to books or articles that
emphasize the business (preferably practical) uses of AI.


                                        Thanks ... Mike
--

Real Life: Michael Bloomberg
   MILNET: bloomber@nosc
     UUCP: [ihnp4,akgua,decvax,dcdwest,ucbvax]!sdcsvax!noscvax!bloomber

------------------------------

Date: Wed, 3 Oct 84 00:05 CDT
From: Jerry Bakin <Bakin@HI-MULTICS.ARPA>
Subject: Keeping up with AI research

I am interested in following trends and research in AI.  What do active
AI'ers feel are the important journals, organizations and conferences?

Thanks,

Jerry Bakin -- Bakin@HI-Multics


[I have sent Jerry the list of journals and conferences compiled by
Larry Cipriani and published in AIList V1 N43.  In short,

    AI Magazine
    AISB Newsletter
    Annual Review in Automatic Programming
    Artificial Intelligence
    Behavioral and Brain Sciences
    Brain and Cognition
    Brain and Language
    Cognition
    Cognition and Brain Theory
    Cognitive Psychology
    Cognitive Science
    Communications of the ACM
    Computational Linguistics
    Computational Linguistics and Computer Languages
    Computer Vision, Graphics, and Image Processing
    Computing Reviews
    Human Intelligence
    IEEE Computer
    IEEE Transactions on Pattern Analysis and Machine Intelligence
    Intelligence
    International Journal of Man Machine Studies
    Journal of the ACM
    Journal of the Assn. for the Study of Perception
    New Generation Computing
    Pattern Recognition
    Robotics Age
    Robotics Today
    SIGART Newsletter
    Speech Technology

    IJCAI   International Joint Conference on AI
    AAAI    American Association for Artificial Intelligence
    TINLAP  Theoretical Issues in Natural Language Processing
    ACL     Association of Computational Linguistics
    AIM     AI in Medicine
    MLW     Machine Learning Workshop
    CVPR    Computer Vision and Pattern Recognition (formerly PRIP)
    PR      Pattern Recognition (also called ICPR)
    IUW     Image Understanding Workshop (DARPA)
    T&A     Trends and Applications (IEEE, NBS)
    DADCM   Workshop on Data Abstraction, Databases, and Conceptual Modeling
    CogSci  Cognitive Science Society
    EAIC    European AI Conference

Would anyone care to add a list of organizations?  -- KIL]

------------------------------

Date: Wed, 3 Oct 84 13:31:08 
From: Bob Woodham <woodham%ubc.csnet@csnet-relay.arpa>
Subject: Top Graduate Programs

I cannot resist offering my contribution but first three comments:

 1. A strict linear ordering is rather meaningless so I've simply listed
    schools alphabetically within two broad categories.
 2. Not surprisingly, given my location, I've expanded things to
    all of North America.  There are good programs outside the continent
    but I'm not qualified to comment.
 3. If your favourite school is missing, let that indicate my ignorance
    rather than a slight.  Since this is roughly the advice I give our own
    students, I'd like to hear more.

Category I:   Major Strength in all Areas of AI (alphabetic order)

CMU, MIT, Stanford

Category II:  Major Strength in at least one Area of AI, adequate overall
              (alphabetic order)

Illinois, McGill, Penn, Rochester, Rutgers, Texas (at Austin), Toronto,
UBC, Yale

There are other schools with strengths, or emerging strengths, that are
worth considering.  Thankfully, I'm already beyond the requested number
of ten.  Any of the above schools could be an excellent choice, depending
on the particular area of interest.

------------------------------

Date: 3 Oct 1984 14:24-PDT
From: scacchi%usc-cse.csnet@csnet-relay.arpa
Subject: ACM 84

Just a short note to point out that the 1984 ACM Conference in San
Francisco has a number of sessions on AI and Fifth Generation
technologies. In particular, there are at least three sessions that
focus on the broader social consequences that might arise from
the widespread adoption and use of AI systems. The three sessions
include:

1. "The Workplace Impacts of Fifth Generation Computing -- AI and Office
Automation" on Tuesday (9 Oct 84) morning

2. "Social and Organizational Consequences of New Generation Technology"
on Tuesday afternoon.

3. "Social Implications of Artificial Intelligence" on Wednesday
afternoon.

If you are able to attend the ACM 84 conference and you are interested
in discussing or learning about social analyses of AI technology
development, then you should try to attend these sessions.

-Walt-

(Scacchi@Usc-cse via CSnet)

------------------------------

Date: 2 Oct 84 16:03:48-PDT (Tue)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: obtaining OPS-5
Article-I.D.: wdl1.458

     OPS-5 is obtained from Charles Forgy at CMU, reached at the following
address.  Do not contact me regarding this.

Forgy, Charles L. (CLF)                              CHARLES.FORGY@CMU-CS-A
   Carnegie-Mellon University
   Computer Science Department
   Schenley Park
   Pittsburgh, Pennsylvania 15213
   Phone: (412) 578-3612

------------------------------

Date: Wed, 3 Oct 84 23:39:58 edt
From: mark@tove (Mark Weiser)
Subject: ops5 and yaps.

For those of you interested in ops5, don't forget YAPS.  Yaps was
described by Liz Allen of Maryland at the '83 AAAI.

Yaps, yet another production system, uses Forgy's high-speed
shortcuts for left-hand sides which fall into ops5's limited
legal lhs, but yaps also allows fully general left-hand sides.
Yaps's second advantage over ops5 is that it is embedded in
the Franz lisp flavors system (also from Maryland), so that
one can have several simultaneous yaps objects and send them
messages like add-a-rule, add-object-to-database, etc.
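For readers who haven't met a production system, the message-style interface described above (add-a-rule, add-object-to-database) can be sketched in miniature.  This toy forward-chaining engine is illustrative only: it omits the Rete-style matching shortcuts that make ops5 and yaps fast, and its single-fact rule conditions are far weaker than yaps's general left-hand sides.

```python
# Minimal forward-chaining production system, sketching the
# add-a-rule / add-object-to-database interface mentioned above.
# Illustrative only; not YAPS or OPS5.

class ProductionSystem:
    def __init__(self):
        self.rules = []        # (condition, action) pairs
        self.database = set()  # working memory of fact tuples

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def add_object(self, fact):
        self.database.add(fact)

    def run(self):
        """Fire rules repeatedly until no rule adds a new fact."""
        changed = True
        while changed:
            changed = False
            for condition, action in self.rules:
                for fact in list(self.database):
                    if condition(fact):
                        new = action(fact)
                        if new not in self.database:
                            self.database.add(new)
                            changed = True

ps = ProductionSystem()
ps.add_rule(lambda f: f[0] == "parent",
            lambda f: ("ancestor", f[1], f[2]))
ps.add_object(("parent", "ann", "bob"))
ps.run()
print(("ancestor", "ann", "bob") in ps.database)  # True
```

One can have several such objects side by side, which is essentially the advantage claimed for embedding the system in a flavors (object) framework.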

For more information, mail liz@maryland.

Spoken: Mark Weiser     ARPA:   mark@maryland
CSNet:  mark@umcp-cs    UUCP:   {seismo,allegra}!umcp-cs!mark

------------------------------

Date: 1 Oct 84 18:21:18-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Windows and Expert Systems
Article-I.D.: wdl1.451

    I've noticed this lately too; I've also seen the claim that ``windows were
developed ten years ago by the AI community'', but the early Alto effort at
PARC, which I saw demonstrated in 1975 by Alan Kay, was not AI-oriented; they
were working primarily on improved user interfaces, including window systems.

                                                John Nagle

------------------------------

Date: 30 Sep 84 8:30:02-PDT (Sun)
From: decvax!mcnc!unc!ulysses!burl!clyde!watmath!water!rggoebel@Ucb-Vax.arpa
Subject: Re: Clarification Regarding Teknowledge's M.1 Product
Article-I.D.: water.20

I've just read what amounts to an advertisement for Teknowledge's
M.1 software product.   I can't believe there isn't something to
be criticized in a product that comes from such an infant technology.
I'd be interested to know what's wrong with M.1.  Will Teknowledge
give it away to universities to teach students about expert systems?
Is SRI-KL using M.1 for anything (note origin of original message)?
On a lighter note, what is novel about a software system that supports
``variables?''

Randy Goebel
Logic Programming and Artificial Intelligence Group
Computer Science Department
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1
UUCP:   {decvax,ihnp4,allegra}!watmath!water!rggoebel
CSNET:  rggoebel%water@waterloo.csnet
ARPA:   rggoebel%water%waterloo.csnet@csnet-relay.arpa

[I am not aware of any SRI use of M.1, nor do I know of anyone at SRI
who has a financial interest in it.  Many people around the country
have mailboxes on systems where they once worked or otherwise have
incidental access; I assume that is the case here.  An SRI group has
recently come out with its own micro-based expert system toolkit,
SeRIES-PC, a PROSPECTOR derivative.  -- KIL]

------------------------------

Date: 1 Oct 84 22:21:20-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Re: Clarification Regarding Teknowle
Article-I.D.: wdl1.453

     I'd like to see them offer a training version of the program for $50 or so
which allowed, say, a maximum of 50 rules, enough to try out the system but
not enough to implement a production application.  This would get the tool
(and the technology) some real exposure.

                                John Nagle

------------------------------

Date: Wed 3 Oct 84 00:05:12-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: re: Induction

Well I guess I don't understand Stan Shebs' point regarding induction
very well.  I agree with everything he said in his message: It is
indeed possible to generate all possible generalizations of some fact
within some fixed, denumerable domain of discourse.  The problem of
induction is to infer PLAUSIBLE beliefs from a finite set of examples.
Shebs is correct in saying that from any finite set of examples, a
very large (usually infinite) set of generalizations can be generated.
He is also correct in saying that--in the absence of any other
knowledge or belief--all of these generalizations are equally
plausible.  The problem is that in common-sense reasoning, all of
these generalizations are not equally plausible.  Some seem (to
people) to be more plausible than others.  This reflects some hidden
assumptions or biases held by people about the nature of the common
sense world.
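The combinatorial point under discussion is easy to make concrete: even a single ground fact yields a candidate generalization for every subset of its constants replaced by variables, and nothing in the data alone ranks them.  A minimal sketch (predicate and variable names are made up for illustration):

```python
# Sketch of the induction point above: one ground fact already has
# 2^k candidate generalizations (k = number of arguments), obtained
# by turning any subset of constants into variables.  The data alone
# gives no reason to prefer one over another.

from itertools import combinations

def generalizations(fact):
    """All ways to replace constants of a ground fact with variables."""
    pred, *args = fact
    results = []
    for r in range(len(args) + 1):
        for idxs in combinations(range(len(args)), r):
            new_args = [f"?x{i}" if i in idxs else a
                        for i, a in enumerate(args)]
            results.append((pred, *new_args))
    return results

for g in generalizations(("likes", "john", "chocolate")):
    print(g)
# four candidates, from fully specific to ("likes", "?x0", "?x1")
```

Which of the four a person finds plausible depends entirely on the hidden biases Dietterich mentions, not on anything in this enumeration.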

------------------------------

Date: Thu, 4 Oct 84 15:17:51 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Natural Language Structure

                      BERKELEY COGNITIVE SCIENCE PROGRAM
                                  Fall 1984
                    Cognitive Science Seminar -- IDS 237A

             TIME:                Tuesday, October 9, 11 - 12:30
             PLACE:               240 Bechtel Engineering Center
             DISCUSSION:          12:30 - 2 in 200 Building T-4

         SPEAKER:        Gilles Fauconnier, Linguistics Dept, UC San
                         Diego & University of Paris

         TITLE:          Roles, Space Connectors & Identification
                         Paths

         ABSTRACT:       Key aspects of natural language organization
                         involve a general theory of connections
                         linking mental constructions.  Logical and
                         structural analyses have overlooked this
                         important dimension, which unifies many
                         superficially complex and disparate
                         phenomena.  I will focus here on the many
                         interpretations of descriptions and names,
                         and suggest a reassessment of notions like
                         rigidity, attributivity, or ``cross-world
                         identification.''

------------------------------

End of AIList Digest
********************

∂08-Oct-84  1204	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #133    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Oct 84  12:02:35 PDT
Date: Mon  8 Oct 1984 09:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #133
To: AIList@SRI-AI


AIList Digest             Monday, 8 Oct 1984      Volume 2 : Issue 133

Today's Topics:
  Bindings - John Hosking Query,
  Workstations - Electrical CAD/CAE & TI LISP Machine,
  AI Tools - Graph Display,
  Expert Systems - Liability,
  Humor - Theorem Proving Contest,
  Comments - Zadeh & Poker,
  Seminar - First Order Logic Mechanization
----------------------------------------------------------------------

Date: Saturday,  6-Oct-84  2:12:41-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: References wanted

Anyone know where I can find anything by John Hosking,
now of Auckland University New Zealand?  Said to be in
expert systems/knowledge representation field.

------------------------------

Date: 3 Oct 84 17:08:44-PDT (Wed)
From: hplabs!intelca!qantel!dual!amd!turtlevax!ken @ Ucb-Vax.arpa
Subject: Electrical CAE software/hardware
Article-I.D.: turtleva.541

We've been gathering information about CAD/CAE for electrical/computer
engineering and have been deluged with a foot's worth of literature.
No one makes the entire package of what we want, which includes
schematic entry, hierarchical simulation, timing verification, powerful
functional specification language, finite-state machine generator, PAL
primitives, PLA and PROM high-level language specification compiling
down to JEDEC format, driver for a Data I/O or more dependable PROM/PAL
programmer, transient and frequency analysis (SPICE works well here),
symbolic, analytical, and graphical mathematics, etc.

We've accepted the fact that we will need to get several packages of
software, but are prepared to buy no more than 1 extra piece of
hardware, if we can't get software to run on our VAX or Cadlinc
workstations.

Has anyone used any of the available products?  Does anyone have any
recommendations?

Following is a list of suppliers of CAE tools of some sort, for which I
managed to get some literature, and is in no way guaranteed to be
complete:

Altera
Assisted Technology
Avera Corporation
Cad Internet, Inc.
Cadmatics
Cadnetix
Cadtec
CAE Systems
Calma
Chancellor Computer Corporation
Control Data
Daisy
Design Aids, Inc.
Futurenet
GenRad
HHB Softron
Inference Corp.
Intergraph
Interlaken Technology Corp.
Mentor
Metalogic, Inc.
Metheus
Mirashanta
Omnicad Corp.
Phoenix
Racal-Redac
Signal Technology, Inc.
Silvar-Lisco
Step Engineering
Symbolics
Teradyne
Valid
Vectron
Versatec
Via Systems
VLSI Technology, Inc.
--
Ken Turkowski @ CADLINC, Palo Alto, CA
UUCP: {amd,decwrl,flairvax,nsc}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA

------------------------------

Date: Fri 5 Oct 84 16:23:15-PDT
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: TI LISP MACHINE

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

Texas Instruments invites ACM attendees (and AIC-ers) to see the new
TI LISP machine demo-ed at the

        San Francisco Hilton
        333 O'Farrell Street
        Imperial Suite Room #1915

        Monday, October 8, 1984
        5:00pm - 8:00pm

Refreshments and hors d'oeuvres.  Bring your ACM badge for admission.

...margaret

------------------------------

Date: Sat 6 Oct 84 23:56:50-PDT
From: Scott Meyers <MEYERS@SUMEX-AIM.ARPA>
Subject: Wanted:  info on printing directed graphs

I am faced with the need to come up with an algorithm for producing
hardcopy of a directed graph, i.e. printing such a graph on a lineprinter
or a V80 plotter.  Rather than just plopping the nodes down helter-skelter,
I will have an entry node to the graph which I will place at the far left
of the plot, and then I will want to plot things so that the edges
generally point to the right.  If anyone has solved this problem or can
give me pointers to places where it has been solved, or can offer any
other assistance, I would very much like to hear from you.  Thanks.

Scott

[Scott could also use a routine for printing graphs top to bottom if
that is available.  -- KIL]
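A common way to attack this layout problem (a sketch of mine, not something from the thread) is longest-path layering, the first phase of Sugiyama-style hierarchical layout: topologically sort the DAG, then place each node one column to the right of its furthest predecessor, so every edge points rightward.

```python
from collections import defaultdict

def layer_dag(nodes, edges):
    """Assign each node a column so that all edges point left to right.

    Uses longest-path layering: a node's column is the length of the
    longest path reaching it from any entry node.  Assumes the graph
    is acyclic, as in the messages above.
    """
    preds = defaultdict(list)
    succs = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)

    # Topological order via Kahn's algorithm.
    indeg = {n: len(preds[n]) for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]
    order = []
    while queue:
        n = queue.pop()
        order.append(n)
        for m in succs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)

    # Push each node one column past its furthest predecessor.
    col = {n: 0 for n in nodes}
    for n in order:
        for m in succs[n]:
            col[m] = max(col[m], col[n] + 1)
    return col

columns = layer_dag(['a', 'b', 'c', 'd'],
                    [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')])
```

Nodes sharing a column can then be assigned distinct rows and joined with line-printer characters or plotter strokes; swapping columns for rows gives the top-to-bottom variant mentioned in the moderator's note.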

------------------------------

Date: Sun, 7 Oct 84 13:47:09 pdt
From: Howard Trickey <trickey@diablo>
Subject: printing graphs

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I wrote a program that takes a graph description and produces a TeX input
file which in turn produces a reasonably nice looking graph on the
Dover (\special's are used to draw lines at arbitrary angles;  I can
use Boise by specifying only rectilinear lines, but it doesn't look as
good).  There's no way to use it as is for the output devices mentioned
in the previous message, but the algorithms I used may be of interest.

There can be different types of nodes, each drawn with a
user-specified TeX macro.  The graph description says which nodes there
are and of what type, and what edges there are.  Edges go to and from
symbolically specified points on nodes.  The output looks best when
the graph is acyclic or nearly acyclic, since that's what my graphs
are; I didn't spend time on other cases.

The program isn't robust or easy enough for general use,
but I can point people to it. If you need the capability badly enough,
it's not too difficult to get used to.  It's written in Franz Lisp.

        Howard Trickey

------------------------------

Date: 3 Oct 84 12:46:11-PDT (Wed)
From: decvax!cwruecmp!atvax!ncoast!rich @ Ucb-Vax.arpa
Subject: AI decision systems - What are the risks for the vendor?
Article-I.D.: ncoast.386

The rapid advance of Artificial Intelligence Software has caused me to
wonder about some of the possible legal problems.

SITUATION:  We are a software vendor that develops an AI software package.
        This package has been tested and appears to be correct in design and
        logic.  Additionally, the package indicates several alternative
        solutions as well as stating that there could be alternatives that
        are overlooked.

        What risk from a legal standpoint does the developer/vendor have to the
        user IF they follow the recommendation of the package AND the decision
        is proven to be incorrect several months later?

I would appreciate your opinions and shall post the compiled responses
to the net.

From:                                  |   the.world!ucbvax!decvax!cwruecmp!
  Richard Garrett @ North Coast Xenix  |       {atvax!}ncoast!rich
  10205 Edgewater Drive: Cleveland, OH |...................................
   (216) 961-3397             \ 44102  |   ncoast (216) 281-8006 (300 Baud)

------------------------------

Date: Sat 6 Oct 84 14:01:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Liability

Just as damning as using an incompetent [software] advisor is
failing to use a competent one.  If a doctor's error makes you a
cripple for life, and if he had available (and perhaps even used)
an expert system counseling a better course of treatment, is he
not guilty of malpractice?  Does the doctor incur a different
liability than if he had used/not used a human consultant?

The human consultant would normally bear part of the liability.
Since you can't sue an expert system, do you sue the company
that sold it?  The programmer?  The theoretician who developed
the algorithm?  I'm sure there are abundant legal precedents for
all of the above.

For anyone with the answers to the above, here's an even more
difficult problem.  Systems for monitoring and interpreting
electrocardiograms are commonly adjusted at the "factory" to
match the diagnostic style of the purchasing physician.  Suppose
that the doctor requests that this be done, or even does it
himself.  Suppose further that he is incompetent at this type
of diagnosis (after all, he's buying a system to do it for him),
and that customization to match his preferences can be shown to
degrade the performance of the software.  Is he liable for operating
the system at less than full capability?  I assume so.  Is the
manufacturer liable for making the adjustment, or for providing
him the means of doing it himself?  I would assume that also.
What are the relative liabilities for all parties?

                                        -- Ken Laws

------------------------------

Date: 4 Oct 1984  09:51 EDT (Thu)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: GSL sponsored Theorem Proving Contest

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

               DATE: Friday, 5 October, 12 noon
               PLACE: 3rd Floor Playroom
               HOST: Reid Simmons

          REAGAN vs. MONDALE THEOREM PROVING CONTEST

To help the scientific community better assess this year's
presidential candidates, GSL (in conjunction with the Laboratory
for Computer Research and Analysis of Politics) proudly presents
the first Presidential Theorem Proving Contest.  The candidates
will have 10 minutes to prepare their proofs, 10 minutes to
present, and then 5 minutes to criticise their opponents' proofs.
A pseudorandom number generator will be used to determine the
order of presentation.  The candidates will be asked to
prove the following theorem:

* Let (a + a + a ...) be a conditionally convergent series.
        1   2   3
  Show by construction that there exists a rearrangement of
  the a  such that
       i
            lim      (a + ... a ) = 0.
          n -> inf     1       n

Note:
  To increase public interest in this contest, the theorem
  will actually be phrased in the following way:

  Let (deficit    + deficit    + deficit    ...) be a
              1980         1981         1982

  series with both positive and negative terms.
  Rearrange the terms so that:

            lim      (deficit    + ... deficit    ) = $ 0.00
         year -> inf         1980             year
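The theorem is, of course, Riemann's rearrangement theorem, and the construction asked for is the classical greedy one: add positive terms while the partial sum is at or below the target, negative terms while it is above. A numerical sketch (mine, not part of the contest) applied to the alternating harmonic series:

```python
def rearrange_to(target, pos_terms, neg_terms, n):
    """Greedily rearrange a conditionally convergent series toward `target`.

    pos_terms(k) and neg_terms(k) give the k-th positive and negative
    terms.  Emits n terms: take positives while the partial sum is at
    or below target, negatives while it is above.  The overshoot is
    bounded by the size of the last term taken, which tends to zero.
    """
    out, s = [], 0.0
    i = j = 0
    for _ in range(n):
        if s <= target:
            t = pos_terms(i)
            i += 1
        else:
            t = neg_terms(j)
            j += 1
        s += t
        out.append(t)
    return out, s

# The alternating harmonic series 1 - 1/2 + 1/3 - ..., rearranged
# so its partial sums approach 0 (a balanced budget, so to speak).
terms, total = rearrange_to(0.0,
                            lambda k: 1.0 / (2 * k + 1),
                            lambda k: -1.0 / (2 * k + 2),
                            100000)
```

After 100,000 terms the partial sum sits within the magnitude of the most recently taken term of the target, so `total` is very close to zero.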

------------------------------

Date: 2 Oct 84 21:50:35-PDT (Tue)
From: hplabs!ames!jaw @ Ucb-Vax.arpa
Subject: Re: Humor & Seminar - Slimy Logic
Article-I.D.: ames.548


     This B-Board article [on slimy logic] is a master parody, right down
to the "so to speak" mannerism.  Thanks for the entertainment!

     I took a couple of courses from Professor Zadeh at Berkeley in the 70s,
not just in Fuzzy Logic, but also formal languages, where we all struggled
with LALR(1) lookahead sets.  The fuzzy controversy was raging then, with
Prof. William Kahan, numerical analyst, being Zadeh's arch-enemy.  Kahan was a
natural devil's advocate, himself none too popular for raving on, in courses
on data structures, a bit muchly about the way CDC 6400 Fortrash treated
roundoff of the 60th bit.  Apparently, there's some bad blood over the size
of Zadeh's grants (NSF?) for his fuzzy baby.  They both have had tenure for
years, so maybe a pie-throwing contest would be appropriate.

     Anyway, looks like the fuzzy stuff is now making the rounds at MIT.
Zadeh, who ironically wrote the book on linear systems (circa 1948), at
least got the linguistics department hopping with the fuzzies, influencing
the Lakoffs (George, mainly) to trade in their equally ad hoc transformational
grammars for fuzzy logic.  Kinda soured me on natural language theory, too.
I mean, is there life after YACC?

     Old Lotfi has left an interesting legacy via his children.  Zadeh's
daughter, I understand, is a brilliant lawyer.  One son, after getting his
statistics Ph.D. at 20 or so, claims to have draw poker figured out.
Bluffing is dealt with by simple probability theory.  As I remember,
"Winning Poker Systems" is one of those "just-memorize-the-equivalent-of-
ten-phone-numbers-for-instant-riches" books.  He worked his way through school
with funds won in Emeryville poker parlors.  Not too shabby, but not too
fuzzy, either ...

        -- James A. Woods  {ihnp4,hplabs,philabs}!ames!jaw  (jaw@riacs.ARPA)


[Dr. Zadeh also invented the Z-transform used in digital signal processing
and control theory.  -- KIL]

------------------------------

Date: 5 Oct 84 18:31:33-PDT (Fri)
From: hplabs!hao!seismo!rochester!rocksanne!sunybcs!gloria!colonel @
      Ucb-Vax.arpa
Subject: Re: fuzzy poker
Article-I.D.: gloria.578

    One son, after getting his statistics Ph.D. at 20 or so, claims to
    have draw poker figured out. ...

When I was working with the SUNY-Buffalo POKER GROUP, we managed to
verify some of N. Zadeh's tables with hard statistics.  Anybody who's
interested can find some of our results in Bramer's anthology _Computer
Game-Playing: Theory and Practice_ (1983).
--
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 05 Oct 84  1318 PDT
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Continuing Seminar - FOL & First Order Logic Mechanization

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Seminar on FOL: a mechanized interpretation of logic
presented by Richard Weyhrauch

Time:  4:15 to 6:00
Date:  Alternate Tuesdays beginning October 9
Place: Room 252, Margaret Jacks Hall

The topic of this seminar is a description of FOL, a collection of structures
that can be used to provide a mechanized interpretation of logic.  We will
present specific examples of interest for logic, philosophy and artificial
intelligence to illustrate how the FOL structures give formal solutions
to, or at least shed light on, some classical problems.  We will also describe
the details of FOL, a computer program for constructing these structures.
This provides a link between logic and AI.

Mechanization is an alternative foundation to both constructive and
classical logic.  I have always found constructive foundations
unconvincing.  Taken by themselves, they fail to explain how we can understand
classical semantics well enough to make the distinction.  Even more -- a
philosophically satisfactory account of reasoning must explain why in the
comparatively well behaved case of mathematical foundations the classical
arguments carry conviction for practising mathematicians.

On the other hand the use of set theoretic semantics also seems to require
infinite structures to understand elementary arguments.  This conflicts
with the simple observation that people understand these arguments and they
are built from only a finite amount of matter.

Mechanization provides a semantics that is both finitist and at the same
time allows the use of classical reasoning.

------------------------------

Date: Sat, 6 Oct 84 13:56:04 pdt
From: Vaughan Pratt <pratt@Navajo>
Subject: FOL seminar

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

    On the other hand the use of set theoretic semantics also seems to
    require infinite structures to understand elementary arguments.  This
    conflicts with the simple observation that people understand these
    arguments ...

In my day it was not uncommon for students to reason about all the reals in a
finite amount of time - in fact it was even required for exams, where you only
had three hours.  Whatever has modern mathematics come to?

    ... and they [people] are built from only a finite amount of matter.

By weight and volume, yes, but with elementary particles breeding like
rabbits one sometimes wonders about parts count.  Now here's a problem
spanning particle physics and number theory: if there exists such a thing
as an elementary particle, and if there are a fixed finite number of them in an
uncharged hydrogen atom at absolute zero, is that number prime?
-v

------------------------------

End of AIList Digest
********************

∂09-Oct-84  0024	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #134    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Oct 84  00:23:58 PDT
Date: Mon  8 Oct 1984 23:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #134
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Oct 1984      Volume 2 : Issue 134

Today's Topics:
  Seminars - AI Control Design & Fault Diagnosis & Composite Graph Theory,
  Lectures - Logic and AI,
  Program - Complexity Year at MSRI
----------------------------------------------------------------------

Date: Mon 8 Oct 84 09:31:31-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Seminar - AI Control Design

From the IEEE Grid newsletter for the SF Bay Area:

Some very exciting new ideas on the role of expert systems and AI in
control design will be presented at the Oct. 25 meeting of the Santa
Clara Valley Control Systems, Man and Cybernetics Society.

The talk, by Dr. Thomas Trankle and Lawrence Markosian of Systems
Control Technology, will report work in progress to develop an AI
system that implements a linear feedback control designer's expert
knowledge.  This AI system is a planning expert system written in LISP,
and has knowledge of linear control design rules and an interface
with a control CAD package.

The LISP code represents the design rules as operators that have
goals, preconditions, and side effects.  Higher-level operators
or "scripts" represent expert design procedures.  The control
design process takes the form of a recursive goal-directed search,
aided by the expert designer's heuristics.

Cocktails at 6:30 pm, dinner ($11) at 7:00, presentation at 8:00.
Rick's Swiss Chalet, 4085 El Camino, Palo Alto
Reservations by Oct. 24, Council Office, (415) 327-6622.

------------------------------

Date: Mon 8 Oct 84 09:48:09-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Reasoning About Fault Diagnosis with LES

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, October 12, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Walter Perkins
             Lockheed Palo Alto Research & Development

ABSTRACT:    Reasoning About Fault Diagnosis with LES

The Lockheed Expert System (LES) is a generic framework for helping
knowledge engineers solve problems in diagnosing, monitoring,
designing, checking, guiding, and interpreting.  Many of the ideas of
EMYCIN were incorporated into its design, but it was given a more
flexible control structure.  In its first "real" application, LES
was used to guide less-experienced maintenance personnel in the fault
diagnosis of a large electronic signal-switching network.  LES used
not only the knowledge of the expert diagnostician (captured in the
familiar form of "IF-THEN" rules), but also knowledge about the
structure and function of the device under study to perform rapid
isolation of the module causing the failure.  In this talk we show how
the topological structure of the device is modeled in a frame
structure and the troubleshooting rules of the expert are conveniently
represented using LES's case grammar format.  We also explain how
"demons" are used to set up an agenda of relevant goals and subgoals.
The system was fielded in November 1983, and is being used by Lockheed
technicians.  A preliminary evaluation of the system will also be
discussed.  LES is being applied in a number of other domains which
include design verification, satellite communication,
photo-interpretation, and hazard analysis.

Paula

------------------------------

Date: Sat 6 Oct 84 15:26:34-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Seminar - Composite Graph Theory

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

AFLB talk
10/11/84 - Joan Feigenbaum (Stanford):

            Recognizing composite graphs is equivalent to
                      testing graph isomorphism

In this talk I will explore graph composition from a
complexity-theoretic point of view.  Given two graphs G1 and G2, we
construct the
composition G = G1[G2] as follows: For each node in G2, insert a copy
of G1.  If two copies correspond to nodes that are adjacent in G2,
then draw in all possible edges x -- y such that x is in one copy and
y is in the other.  A graph that can be expressed as the composition
of two smaller graphs is called composite and one that cannot is
called irreducible.
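The construction in the abstract can be transcribed directly into code. This sketch (mine, not from the talk) follows the stated rule exactly, representing a graph as a dict mapping each node to its set of neighbors:

```python
def compose(g1, g2):
    """Build the composition described above: one copy of g1 per node
    of g2, plus all cross edges between copies whose g2-nodes are
    adjacent.

    Graphs are dicts mapping each node to a set of neighbors.
    Composite nodes are (g2_node, g1_node) pairs.
    """
    g = {(u, x): set() for u in g2 for x in g1}
    for u in g2:
        for x in g1:
            # Edges inside the copy of g1 sitting at node u of g2.
            for y in g1[x]:
                g[(u, x)].add((u, y))
            # All possible edges to copies at g2-neighbors of u.
            for v in g2[u]:
                for y in g1:
                    g[(u, x)].add((v, y))
    return g

# Composing K2 with K2 yields K4: two copies of an edge, fully joined.
k4 = compose({'a': {'b'}, 'b': {'a'}}, {1: {2}, 2: {1}})
```

The example illustrates why recognition is nontrivial: the result of a composition can coincide with a familiar graph that betrays nothing of its factors.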

Composite graphs have a great deal of structure and their abstract
mathematical properties have been studied extensively.  In particular,
Harary and Sabidussi have characterized the relationships between the
automorphism groups of G1 and G2 and the automorphism group of their
composition.  Graph composition has been used by Garey and Johnson and
Chv\'atal to study NP-complete problems.  Garey and Johnson used it to
derive upper bounds on the accuracy of approximation algorithms for
graph coloring.  Chv\'atal showed that the Hamiltonian circuit problem
remains NP-complete even if the input graph is known to be composite.
In this talk, I consider what seems to be a more basic question about
composite graphs; namely, how difficult are they to recognize?

The main result I will give is that testing whether a graph is
composite is equivalent to testing whether two graphs are isomorphic.
In the proof that recognizing composite graphs is no harder than
testing graph isomorphism, I will give an algorithm that either
declares a graph irreducible or finds a non-trivial decomposition.
This distinguishes graph decomposition from integer factorization,
where primality-testing and factoring are not known to have the same
complexity.  The inherent difficulty of the recognition problem for
composite graphs gives some insight into why some difficult graph
theoretic problems, such as Hamiltonian circuit, are no easier even if
the inputs are known to be composite.  Furthermore, assuming P does
not equal NP, graph isomorphism is one of the most important problems
for which neither a polynomial time algorithm nor a proof that there
cannot be such an algorithm is known.  Perhaps examining a problem
that is equivalent to it will yield insight into the complexity of the
graph isomorphism problem itself.  For example, if all irreducible
graphs have succinct certificates, then graph isomorphism is in Co-NP.

If there is time, I will also show that for cartesian multiplication,
another way to construct product graphs, the recognition problem is in
P.  This talk presents joint work with Alex Schaffer.

***** Time and place: October 11, 12:30 pm in MJ352 (Bldg. 460) ****

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
                                                - Andrei Broder

------------------------------

Date: Mon, 8 Oct 84 15:00:32 edt
From: minker@maryland (Jack Minker)
Subject: Lectures - Logic and AI at Maryland, Oct. 22-26


                     FINAL ANNOUNCEMENT
                            WEEK
                             of
       LOGIC and its ROLE in ARTIFICIAL INTELLIGENCE
                             at
                 THE UNIVERSITY OF MARYLAND
                    OCTOBER 22-26, 1984


The Mathematics and Computer Science Departments at the University of
Maryland at College Park are jointly sponsoring a Special Year in
Mathematical Logic and Theoretical Computer Science.  The week of
October 22-26 will be devoted to Logic and its role in Artificial
Intelligence.  The titles and abstracts of the five distinguished
lectures that are to be presented are as follows:


                    Monday, October 22:

      RAYMOND REITER (University of British Columbia)

   LOGIC FOR SPECIFICATION: DATABASES, CONCEPTUAL MODELS
          AND KNOWLEDGE REPRESENTATION LANGUAGES.


AI systems and databases have a feature in common: they require
representations for various aspects of the real world.  These
representations are meant to be queried and, in response to new
information about the world, modified in suitable ways.  Typically,
these query and modification processes require reasoning using the
underlying representation of the world as premises.  So, it appears
natural to use a suitable logical language for representing the
relevant features of the world, and proof theory for the reasoning.
This is not the normal practice in databases and AI.  The
representations used assume a variety of forms, usually bearing
little or no resemblance to logic.  In AI, examples of such
representation systems include semantic networks, expert systems,
and many different knowledge representation languages such as KRL,
KL-ONE, and FRL.  In databases, example representation systems are
the relational data model and various conceptual or semantic models
like TAXIS and the entity-relationship model.  The point of these
representation systems is that they provide their users with
computationally efficient ways of representing and using knowledge
about an application domain.  The natural role of logic in databases
and AI is as a language for specifying representation systems.  On
this view, one must distinguish between the abstract specification,
using logic, of the knowledge content of a database or AI
application, and its realization as a representation system.  This
distinction has pleasant consequences:

      1. The logical specification provides a rigorous semantics
      for the representation system realizing the specification.

      2. One can prove the correctness of representation systems
      with respect to their logical semantics.

      3. By taking seriously the problem of logically specifying an
      application, one discovers some rich and fascinating
      epistemological issues, e.g. the centrality of non-monotonic
      reasoning for representation systems.


                   Tuesday, October 23:

            JOHN McCARTHY (Stanford University)

               MATHEMATICS OF CIRCUMSCRIPTION


Circumscription (McCarthy 1980, 1984) is a method of non-monotonic
reasoning proposed for use in artificial intelligence.  Let A(P) be
a sentence expressing the facts "being taken into account", where P
stands for a "vector" of predicates regarded as variable.  Let
E(P,x) be a wff depending on a variable x and the Ps.  The
circumscription of E(P,x) is a second order formula in P expressing
the fact that P minimizes lambda x.E(P,x) subject to the facts
A(P).  The non-monotonicity arises because augmenting A(P)
sometimes reduces the conclusions that can be drawn.
Circumscription raises mathematical problems similar to those that
arise in analysis in that it involves minimization of a functional
subject to constraints.  However, its logical setting doesn't seem
to permit direct use of techniques from analysis.  Here are some
open questions that will be treated in the lecture.

      1. What is the relation between minimal models and the
      theory generated by the circumscription formula?

      2. When do minimal models exist?

      3. The circumscription formula is second order.  When is it
      equivalent to a first order formula?

      4. There are several variants of circumscription, including
      successive circumscriptions and prioritized circumscription.
      What are the relations among these variants?

      References:

      McCarthy, John (1980): "Circumscription - A Form of
      Non-Monotonic Reasoning", Artificial Intelligence, Volume 13,
      Numbers 1,2, April.

      McCarthy, John (1984): "Applications of Circumscription to
      Formalizing Common Sense Knowledge".  This paper is being
      given at the 1984 AAAI conference on non-monotonic reasoning
      and is being submitted for publication to Artificial
      Intelligence.
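For readers unfamiliar with the schema the open questions refer to, the circumscription of E can be written out as follows (my transcription of the formula in McCarthy 1980, with A(P) the facts and E(P,x) the wff being minimized; it asserts that no P' satisfying A has a strictly smaller extension of E):

```latex
A(P) \;\wedge\; \forall P'\,\bigl[\, A(P') \wedge \forall x\,(E(P',x) \rightarrow E(P,x))
      \;\rightarrow\; \forall x\,(E(P,x) \rightarrow E(P',x)) \,\bigr]
```

Question 3 above asks when this second-order sentence collapses to a first-order one.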


                     Wednesday, October 24:

            MAARTEN VAN EMDEN (University of Waterloo)

      STRICT AND LAX INTERPRETATIONS OF RULES IN LOGIC PROGRAMMING


      Under the strict interpretation, only what is explicitly
      allowed by a rule is admitted; under the lax interpretation,
      only what is explicitly disallowed is excluded.  This
      distinction is important in mathematics and in law, for
      example.  Logic programs, too, are susceptible to both
      interpretations.  We discuss the use of fixpoint techniques
      to determine Herbrand models of logic programs.  We find that
      least fixpoints and least models correspond to the strict
      interpretation and characterize successful finite
      computations of logic programs.  Greatest fixpoints and
      greatest models correspond to the lax interpretation and are
      closely related to negations inferred by finite failure and
      to terms constructed by certain infinite computations.
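For a propositional definite program, the least-fixpoint half of this correspondence is easy to sketch (my illustration, not from the talk): iterate the immediate-consequence operator from the empty interpretation until it stabilizes.

```python
def least_model(rules):
    """Least Herbrand model of a propositional definite program.

    rules: list of (head, [body atoms]) pairs.  Repeatedly adds the
    head of any rule whose body atoms already hold; the limit is the
    least fixpoint, i.e. the strict interpretation: an atom holds
    only if some rule explicitly derives it.
    """
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# p.  q :- p.  r :- q, s.   Since s is never derived, neither is r.
m = least_model([('p', []), ('q', ['p']), ('r', ['q', 's'])])
```

The greatest fixpoint would instead start from the full Herbrand base and delete atoms no rule supports, which is the lax interpretation the abstract describes.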


                      Thursday, October 25:

                JON BARWISE (Stanford University)

                        CONSTRAINT LOGIC.


      Constraint Logic is based on a semantics that grew out of
      situation semantics, but on a syntax similar to that of
      first-order logic.  The semantics is not carried out in set
      theory, as is usual in logic, but in a richer theory I call
      Situation Theory, a theory about things like situations,
      roles, conditions, types and constraints.  While the syntax
      is not so unusual looking, the connection between the syntax
      and semantics is much more dynamic than in traditional
      logic, since the interpretation assigned to a given *use* of
      some expression will depend on context, in particular on the
      history of the "session".  For example, variables are
      interpreted as denoting roles, but different uses of a given
      variable x may denote increasingly constrained roles as a
      session proceeds.  This is one feature that makes Constraint
      Logic interesting with regard to AI in general and to
      non-monotonic logic in particular.


                       Friday, October 26:

           LAWRENCE HENSCHEN (Northwestern University)

      COMPILING CONSTRAINT-CHECKING PROGRAMS IN DEDUCTIVE DATABASES.


      There are at least two kinds of formulas in the intensional
      database which should always be satisfied by the
      interpretations corresponding to the various states of the
      database: definitions and integrity constraints.  In our
      approach, formulas defining new relations are used in
      response to queries to compute portions of those defined
      relations; such formulas are therefore automatically
      satisfied by the underlying database state.  On the other
      hand, integrity constraints may need to be checked each time
      the database changes.  Of course, we believe there are
      significant advantages in being able to express integrity
      constraints in a non-procedural way, such as with first
      order logic.  However, reevaluating an entire first-order
      statement would be wasteful, as normally only a small
      portion of the database needs to be checked.  We present
      (resolution-based) techniques for developing from
      first-order statements efficient tests for classes of
      updates.  These tests can be developed at database creation
      time, hence are compiled, and can be applied before a
      proposed update is made so that failure does not require
      backing out.


     Lectures will be given at:

                  MWF 11:00 AM - 12:30 PM
                  TTH 10:00 AM - 11:30 AM

     Location: Mathematics Building, 3rd Floor Room Y3206

     The lectures are open to the public.  If you plan to attend,
kindly notify us so that we can make appropriate plans for space.
We regret that all funds available to support junior faculty and
graduate students have been allocated.  For additional information
contact:

                        Jack Minker
               Department of Computer Science
                   University of Maryland
                   College Park, MD 20742
                       (301) 454-6119
                      minker@maryland

------------------------------

Date: Mon, 8 Oct 84 15:24:48 pdt
From: ann%ucbernie@Berkeley
Subject: Program - Complexity Year at MSRI

      [Forwarded from the Univ. of Wisconsin by Udi@WISC-RSCH.]
           [Forwarded from the SRI bboard by Laws@SRI-AI.]


                         COMPLEXITY YEAR AT
               MATHEMATICAL SCIENCES RESEARCH INSTITUTE


     A year-long research program in computational complexity will take
place at the Mathematical Sciences Research Institute, Berkeley, California,
beginning in August, 1985.  Applications are solicited for memberships in
the Institute during this period.  The Institute will award eight or more
postdoctoral fellowships to new and recent Ph.D.'s who intend to participate
in this program.  These fellowships are generally for the entire year, but
half-year awards are also possible.  It is hoped and expected that members
at the more senior level will come with partial or full support from
sabbatical leaves and other sources.  Memberships for any period are possible,
although, for visits of less than three months, Institute support is limited
to awards to help offset living expenses.


     The Program Committee for the complexity year consists of Richard Karp
and Stephen Smale (co-chairmen) and Ronald Graham.  The program will emphasize
concrete computational problems of importance either within mathematics and
computer science or in the application of these disciplines to operations
research, numerical computation, economics and other fields.  Attention will
be given both to the design and analysis of efficient algorithms and to the
inherent computational complexity of problems.  Week-long workshops are planned
on topics such as complexity theory and operations research, complexity theory
and numerical analysis, algebraic and number-theoretic computation, and
parallel and distributed computation.  Programs in Mathematical Economics
and in Geometric Function Theory will take place concurrently with the
Computational Complexity program.


     Address inquiries and applications to:

                Calvin C. Moore, Deputy Director
                Mathematical Sciences Research Institute
                2223 Fulton St., Room 603
                Berkeley, California   94720

     Applicants' files should be completed by January 1, 1985.

     The Institute is committed to the principles of Equal Opportunity and
Affirmative Action.

------------------------------

End of AIList Digest
********************

∂10-Oct-84  1509	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #135    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Oct 84  15:06:35 PDT
Date: Wed 10 Oct 1984 11:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #135
To: AIList@SRI-AI


AIList Digest           Wednesday, 10 Oct 1984    Volume 2 : Issue 135

Today's Topics:
  Expert Systems - NL Interfaces & Training Versions,
  AI Reports - Request for Sources & Computer Decisions Article,
  News - TI Lisp Machines & MCC,
  AI Tools - Printing Directed Graphs,
  Law - Liability for Expert Systems
----------------------------------------------------------------------

Date: 7 Oct 84 22:43:39-PDT (Sun)
From: hplabs!sdcrdcf!trwrba!cepu!ucsbcsl!discolo @ Ucb-Vax.arpa
Subject: Writing natural language/expert systems software.
Article-I.D.: ucsbcsl.172

I will be writing a simple expert system in the near future and was
wondering about the advantages and disadvantages of writing something like
that in Prolog or Lisp.  I seem to prefer Prolog, even though I don't
know either one very well yet.  Are there any other languages out there
which are available under 4.2BSD for this purpose?

I would appreciate replies via mail.  Thanks.

uucp: ucbvax!ucsbcsl!discolo
arpa: ucsbcsl!discolo@berkeley
csnet: discolo@ucsb
USMail: U.C. Santa Barbara
        Department of Computer Science
        Santa Barbara, CA  93106
GTE: (805) 961-4178

------------------------------

Date: 9 Oct 84 3:42:10-PDT (Tue)
From: hplabs!kaist!kiet!sypark @ Ucb-Vax.arpa
Subject: Natural Language Processing Systems
Article-I.D.: kiet.232

Please send me information about natural language processing
systems that serve as machine translators or as I/O interfaces for expert
systems.  What I want is the following:
        1. An overview of the system
        2. Is source code available?
        3. What is the price?

------------------------------

Date: 9 Oct 84 09:14 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Training version of Expert System Tools

John Nagle brings up a good idea when talking about M.1.  One major
problem in trying to investigate various Expert System Building Tools is
that they are very expensive just to buy to find out whether they
actually lend themselves well to solving a problem.  One never really
can find out what it is like to use a system from a canned demo or user
guides.  The idea of having a training version (a stripped down version
that doesn't allow full-sized applications) could give someone enough
experience with the system to allow them to know what sorts of
application a tool is good for.  (Undoubtedly this would be viewed as a
bad marketing ploy: why would anyone want to come up with a cheap system
that would probably only keep someone from buying the full-fledged
expensive version?)

With that comment, I pessimistically ask:  Does anyone know of any tool
out there that has such a stripped down training version?

--Ken <Feuerman.pasa@XEROX.ARPA>.

------------------------------

Date: 9 Oct 1984 16:32:15 EDT (Tuesday)
From: Charles Howell <m15434@mitre>
Subject: Various Technical Reports


I would like to know what Technical Reports  are  available  from
some of the leading centers for research in AI and related fields
(how's that for a broad topic?).  Any  addresses  of  Publications
Offices (or whatever) that have a catalog and ordering / purchase
information will be appreciated. Implicit in this  request  is  a
request  for  suggestions  about  what  places  are  putting  out
interesting reports; any and all suggestions will  be  cheerfully
accepted!  I'll collect the answers and post them to the AIList if
there is much response.

Thanks,
Chuck Howell      Howell at MITRE

------------------------------

Date: Tue 9 Oct 84 22:54:27-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Computer Decisions Article

I just ran across an AI-in-business article in the August issue
of Computer Decisions.  It features a roundtable of 14 consultants
and business bigwigs.  Phone numbers and reader service numbers
are given for 18 AI vendors, and mention is made of an annual
AI report -- AI Trends '84, a description of the technologies and
profile of 50 key vendors by DM Data Inc., Scottsdale AZ, $195,
(602) 945-9620.  The article includes advice on getting started
in AI (buy some Lisp machines, hire some hackers and AI experts,
and expect some failures), a short glossary (including Lisp,
a new language ...), and a short bibliography (including The
Mythical Man-Month).

                                        -- Ken Laws

------------------------------

Date: Tue 9 Oct 84 15:19:30-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: news from Austin: TI Explorer, MCC, and more

[ from the Austin American Statesman, p. D6 - Oct 9, 84 ]

            TI Explorer finds new path
        =================================

Texas Instruments in Austin has landed a major business prize: a
multi-million-dollar order for up to 400 of its highly sophisticated Explorer
symbolic processing systems from the Laboratory for Computer Science at MIT.
The computers will be bought over the next 2 years to establish the world's
largest network of LISP machines involved in computer research.  TI officials
said the order is significant in view of the fact that only about 1,000 of
the specialized computers are in existence. TI plans to deliver 200 machines
in 1985 and 200 in 1986.

     Boeing joins MCC as 19th member of the consortium
   ====================================================

... paying a sign-up fee of $500,000.  The cost for joining goes up
to $1-million on Jan 1.

   There are 4 separate research programs at MCC, with a combined annual
budget of more than $50 million.  Boeing reportedly has joined only one
program thus far, an effort to find new ways to connect complex computer
chips with the equipment the chips are supposed to control, but is
considering joining the other three as well.

MCC's managers are especially eager for Boeing to join the artificial
intelligence program.  They believe Boeing's participation in that expensive
program would draw other aerospace companies to it, spreading out the expense
and making it a cheaper deal for everyone involved.

Boeing is the fourth major aerospace defense contractor to become an MCC
member [following Rockwell, Lockheed, and Martin Marietta].

[ in other news:  real estate prices and traffic jams are coming along nicely,
thank you.   the city is being sued by the state for polluting the river and
trying to sue everyone connected with building 2 nuclear power reactors, which
are WAY overdue and WAY over-budget, and not close to being finished.  Austin
is still trying to sell its 16% of the project, and given that nobody wants to
buy it, is close to pushing for abandoning the whole project.    So you really
don't want to come here .....    (-: I don't make the news, only report it ]

------------------------------

Date: Tue 9 Oct 84 17:49:39-PDT
From: PENTLAND@SRI-AI.ARPA
Subject: TI's new Lisp Machines

News about TI's new Lisp Machines:

Timing figures, 1/60th of a second.
Both TI and 3600 were 1Mword memory, 300Mbyte disk

op              TI      3600            comment
---------------------------------------------------------------
bitblt          270     441     shows basic memory cycle time
floating pt     23      17      //,* about the same, TI has 25 bit number
cons            25-40   17-40   depends somewhat on paging
paging          225-280 160-450 same transfer rate, seek time 50% more for TI
create flavor
  instance      140     52      not fully microcoded yet
send msg        52      21      not fully microcoded yet
function call   31      16      not fully microcoded yet
32bit floating  33      17      includes consing in TI machine

It appears that by the April delivery date, the TI will be the equal of a
3600.  It is already much faster than an LMI, CADR, or LM2 (I ran these
benchmarks on an LM2; it was 1/2 to 1/5 the speed of the TI in all cases).
Ask for the benchmark programs if you are interested in details.

------------------------------

Date: Mon, 8 Oct 84 16:58 CDT
From: Jerry Bakin <Bakin@HI-MULTICS.ARPA>
Subject: Re: Wanted: info on printing directed graphs

Some friends of mine came up with such a program.  I have included its
first comment below.

It is written in Pascal, somewhere; I have a version I rewrote (i.e.,
force-translated) into Multics PL/I.  If you can use either one, let me
know.  We do not support FTP, so if there is a wide demand for this, I
may ask someone else to take it off my hands.

There might be a small problem: they are currently selling some of
their software, and I have to find out if this is a portion of that software.

Even if it is, the following provides a source for more information.


(* TRPmod - A routine to print N-ary trees on any character printer.  This
   routine takes as input an arbitrary N-ary tree, some interface routines, and
   assorted printer parameters and writes a pictorial representation of that
   tree using an output routine provided in the call to treeprint.  The tree is
   nicely formatted and is divided into vertical stripes that can be taped
   together after printing.  Options exist to print the tree backwards or
   upside down if desired.

   The algorithm for treeprint originally appeared in "Pretty-Printing of
   Trees", by Jean G. Vaucher, Software-Practice and Experience, Vol. 10,
   pages 553-561 (1980).  The algorithm used here has been modified to support
   N-ary tree structures and to have more sophisticated printer format control.
   Aside from a common method of constructing an ancillary data structure and
   some variable names, they are now very dissimilar.

   treeprint was written by Ned Freed, Kevin Carosso, and Douglas
   Grover at Harvey Mudd College. (714) 621-3219 (ask for the Mathlib
   Director) *)
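[For readers without access to the Pascal or PL/I sources, the core layout
idea described in the comment above (place leaves in successive columns,
center each parent over its children, then emit the tree level by level)
can be sketched as follows.  This is an illustrative reconstruction, not
the Mathlib TRPmod code, and it omits the vertical-stripe splitting and
the backwards/upside-down options. -- Ed.]

```python
# Illustrative reconstruction (NOT the Mathlib TRPmod code) of the
# bottom-up layout idea behind Vaucher-style tree printing: leaves claim
# successive columns, each parent is centered over its children, and the
# tree is then emitted one level per output row.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def layout(root):
    """Assign a start column to every node; return total width in chars."""
    next_col = [0]
    def place(node):
        if not node.children:
            node.col = next_col[0]
            next_col[0] += len(node.label) + 1   # one space between leaves
        else:
            for child in node.children:
                place(child)
            first, last = node.children[0], node.children[-1]
            node.col = (first.col + last.col) // 2   # center over children
    place(root)
    return next_col[0]

def render(root):
    """Return the tree as printer-ready text, one level per line."""
    width = layout(root)
    rows, level = [], [root]
    while level:
        line, next_level = [' '] * width, []
        for node in level:
            line[node.col:node.col + len(node.label)] = node.label
            next_level.extend(node.children)
        rows.append(''.join(line).rstrip())
        level = next_level
    return '\n'.join(rows)

# (1 + 2) * 3 as an expression tree:
print(render(Node('*', [Node('+', [Node('1'), Node('2')]), Node('3')])))
```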

------------------------------

Date: 6 Oct 84 8:51:42-PDT (Sat)
From: decvax!mcnc!idis!cadre!geb @ Ucb-Vax.arpa
Subject: re: liability for expert systems
Article-I.D.: cadre.57

This is a subject that we are quite interested in as we
develop medical expert systems.  There has been no court
case nor precedent nor law covering placement of blame
in the cases of errors in expert systems.  The natural
analogy would be medical textbooks.  As far as I know,
no author of a textbook has been found liable for errors
that resulted in mistreatment of a patient.  Therefore,
the logical liability should lie with the treating physician
to properly apply the knowledge.

Having said this, it is best to recognize that customs such
as this were developed in a much different society of 100
years ago.  Now every possible person in a case is considered
fair game and undoubtedly until a court rules or legislation
is passed, you must consider yourself at risk if you distribute
an expert system.  Unfortunately, there is no malpractice
insurance available for programmers and you will find a clause
in just about any other insurance that you might carry that
states that the insurance you have doesn't cover any lawsuits
stemming from the practice of your profession.  Sorry.

------------------------------

Date: 10 October 1984 0854-PDT (Wednesday)
From: bannon@nprdc (Liam Bannon (UCSD Institute for Cognitive Science))
Reply-to: bannon <sdcsla!bannon@nprdc>
Subject: Liability and Responsibility wrt expert systems

        I was interested in the messages raising the issue of where
responsibility lies if a person follows the advice of an AI system and
it turns out to be wrong, or where the person disregards the computer
system advice, but the system turns out to be right (AI Digest V2#133).
        I am not a lawyer or AI system builder,
but I am concerned about some of the social dimensions
of computing, and have been concerned about how expert systems might
actually be used in the work environment.  There have been few
full-length papers on this topic, to my knowledge.  One that I have
found interesting is that by Mike Fitter and Max Sime "Creating
Responsive Computers: Responsibility and Shared Decision-Making" which
appeared in the collection H. Smith and T. Green (Eds.) Human
Interaction with Computers (Academic Press, 1980).  They point out "the
possibility that a failure to use a computer might be judged negligent
if, for example, a physician neglected to ask a question, the answer
to which was crucial to a diagnosis, AND a computer system would have
asked the question."  This hinges on a famous 1928 case in the US, called
the T.J. Hooper, where a tugboat owner was found negligent for not having
radio sets on his tugs, and thus not hearing radio reports of bad weather
that would have made them seek safety, avoiding the loss of the barges
the tugs had in tow - this despite the fact that at that
time radio was only used by one tugboat company!
        This raises a host of interesting questions about how expert
systems could/should be used, especially in medicine, where the
risks/benefits are highest. Comments?
                                        -liam bannon

------------------------------

End of AIList Digest
********************

∂11-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #136    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Oct 84  11:48:08 PDT
Date: Thu 11 Oct 1984 09:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #136
To: AIList@SRI-AI


AIList Digest           Thursday, 11 Oct 1984     Volume 2 : Issue 136

Today's Topics:
  AI Tools - LMI (Uppsala) Prolog & Kahn's DCG's,
  Law - Liabilities of Software Vendors,
  Games - Preliminary Computer Chess Results,
  Psychology - Distributed Intelligence,
  Linguistics - Sastric Sanskrit,
  Conference - Computational Linguistics Call for Papers
----------------------------------------------------------------------

Date: 8 Oct 84 13:53:26-PDT (Mon)
From: hplabs!sdcrdcf!trwrba!logico!burge @ Ucb-Vax.arpa
Subject: LMI (Uppsala) Prolog + Kahn's DCG's: User Experiences
Article-I.D.: logico.124

Does anyone have any experiences to relate about "LM-Prolog", implemented in
Zetalisp at the University of Uppsala by Ken Kahn and Mats Carlsson? And/or
of the DCG and "Grammar Kit" that comes with it? (We've been using the DEC-11
implementation for several years, but now it's time to expand...)

Also, our site is new to the net, and if anyone could send me previous
items, it would help me find out what all has been happening out there...!!

--John Burge                                                    [818] 887-4950
LOGICON, Operating Systems Division, 6300 Variel #H, Woodland Hills, Ca. 91367

------------------------------

Date: Wed, 10 Oct 84 13:55:07 cdt
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Liabilities of Software Vendors

Maybe I am being naive or something, but I don't see why AI software should
be different from any other when it comes to the liability of the vendor.
My attorney has written me a boilerplate contract that contains a clause
something to the effect that "vendor is not liable for third-party or
consequential damages that result from the use of the product."
Doesn't that take care of the problem?  If not, maybe I had better find
an expert attorney system.

------------------------------

Date: Wed 10 Oct 84 01:57:07-PDT
From: Donald S. Gardner <GARDNER@SU-SIERRA.ARPA>
Subject: Preliminary computer chess results

The computer chess championship is almost over and BELLE has severely
bitten the dust.  This special-purpose hardware (with ~1600 integrated
circuits and a PDP-11/23) first tied a program called Phoenix running
on a VAX-11/780 and then was beaten by NuChess running on a CRAY 1M.
NuChess was the program previously called chess 4.7 and was the champion
until 1980 when it was beaten by BELLE.

The first-place winner during the fourth round was declared to be
the program CRAY BLITZ running on a cluster of 4 (FOUR) CRAYs.
This system checks in at 420 million instructions per second.
Now, CRAY time costs approximately $10,000 per hour per computer, and each
game lasts around 5 hours.  This adds up to a cool $1M in computer time!
Of course that is in "funny money", but still impressive.  There was also
a program from Canada which ran on 8 Data General computers (Novas and
an Eclipse), two more CRAYs (80 mips each), two Amdahl computers (10 &
13 mips), one CDC Cyber 176 (35 mips), and a Burroughs 7800 (8 mips).

------------------------------

Date: 9 Oct 84 11:53:24 PDT (Tuesday)
From: Jef Poskanzer <Poskanzer.PA@XEROX.ARPA>
Reply-to: SocialIssues↑.PA@XEROX.ARPA
Subject: Distributed Intelligence

          [Excerpted from Human-Nets Digest by Laws@SRI-AI.]

By Erik Eckholm
New York Times

    Computer buffs call it "flaming."  Now scientists are documenting
and trying to explain the surprising prevalence of rudeness,
profanity, exultation and other emotional outbursts by people when
they carry on discussions via computer.  [...]  "It's amazing," said Kiesler.
"We've seen messages sent out by managers - messages that will be seen
by thousands of people - that use language normally heard in locker rooms."

[...] in addition to calling each other
more names and generally showing more emotion than they might face to
face, people "talking" by computer took longer to agree, and their
final decisions tended to involve more risks than those reached by
groups meeting in person.  [...]

    "This is unusual group democracy," said Sara Kiesler, a
psychologist at Carnegie-Mellon.  "There is less of a tendency for
one person to dominate the conversation, or for others to defer to the
one with the highest status."  [...]

------------------------------

Date: 9 Oct 1984 11:09-PDT (Tuesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit

        I would like to respond to recent criticisms concerning Sastric
Sanskrit.
        Firstly, Kiparsky is confusing Sanskrit in general with Sastric
Sanskrit.  His example, "bhikshuna rajna..." is NOT Sastric Sanskrit but
plain ordinary Classical Sanskrit.  I did not mean to imply that lack of
word order is a sufficient condition for unambiguity, only that it is
an indication.
        As to Dr. Dyer's comments: Yes, a parser will be needed due to
the difficulty with translations but this is due to the nature of what
one translates into.  In the case of English, the difference between
the two languages creates the difficulty in translation, not inherent
complexities in Sastric Sanskrit. The work I mentioned was edited by
Pandit Sabhapati Sharma Upadhyaya in Benares, India and published
recently (1963) by the Chowkhamba Sanskrit Series Office.  Also, there
is something like a concept of scripts in that subsets of discourse
(possibly nested) are marked off ("iti" clauses) and therefore the
immediate context is defined.
        My comments about English stem from its lack of case.  Languages
like Latin are potentially capable of rendering logical formulation
with less ambiguity since a mapping from its syntactic cases can be made
to a set of "semantic cases", depending on how good the case system is.
Sanskrit has 8 (including the vocative) and a correspondence (though
not complete) is made between the cases of Classical Sanskrit and the
"karakas" or auxiliary actions in grammatical Sastric Sanskrit.  For
example, the dative case is "usually" mapped onto the semantic case
"recipient" but not always.  The exceptions make up the extension
from the commonly known language and the Sastra.
        An example is in order:

        "Caitra cooks rice in a pot" is expressed ordinarily in
Sanskrit as
        "Caitra: sthaalyaam taNDulam pacati" (double vowels indicate
length, capitals indicate retroflex)

        In Sastric Sanskrit:

        sthaliiniSTataNDulaniSTa:
        viklittijanakashcaitraabhinnaashrayako: vyaapaara:

        which translates into English:

        "There is an activity(vyaapaara:) , subsisting in the pot,
        with agency residing in one substratum not different from
        Caitra, which produces the softening which subsists in rice."

        The vocabulary is the same as in Classical Sanskrit, with
the addition of terms such as "none other than", and "not different
from".  Syntax is eliminated in the sense that the sentence is read
as "there is an abstract activity" with a series of "auxiliary
activities" which "agree" semantically with vyaapaara:.  Thus each
agreement here ends with ah: which indicates its SEMANTIC agreement with
the abstract activity.  What I am saying is that each "karaka" is
equivalent to a semantic net triple, which can be stored away as,
e.g., "activity, agent, none other than Caitra", etc.

        Thirdly, O'Keefe's first two points have been addressed.
Sanskrit is definitely Indo-European, but its daughter languages
inherited the verbal roots (dhatus), not the methodology of its
grammarians.  Even though no other natural language (that I know of)
has found it worthwhile to pursue the development of unambiguous
languages for a thousand years or so, one parallel can be found:
recent work in natural language processing.  The difference is
that THEY used it in ordinary communication and AI techniques have
computer processing in mind.  Even though the language is dead there
are theoretical works which deal specifically with unambiguity.
After reading these, even though you may argue that ambiguity exists
(I'd like to see those arguments), you must concede that total
precision and an escape from syntax and ambiguity was a primary aim
of these scientists.  I find that interesting in itself.  It is
a possible indication that we do actually think "in semantic nets"
at some deep level.  Point e) again is a confusion with regular
Sanskrit.  The example of 4 people in a room A,B,C,D would not
be a problem in this language.  Since precision is required in
utterances (see the example above), one would simply not say
"we came from X"; you would say "there was an activity connected
to a coming-activity, having as object X and having agency residing
in none other than (we 2, we 3, etc.)."  The number would have to
be specified.  "Blackbird" would be specified either as "a color-event
residing in a bird" or "blackbird" would be taken as a primitive
nominal.

        Lastly, Jeff Elman's criticisms.  A comparison between
mathematics and Sastra is not a fair one.  Sastric texts have
been written in the domains of Science, Law, Mathematics, Archery,
Sex, Dance, Morality...  I wonder how these texts could be written
in mathematical formalisms; the Sastric language is, however,
beautifully and elegantly suitable for these texts (Sastra means
basically "scientific").  I disagree with the statement that
"Surface ambiguity gives the language a flexibility of expression.
That flexibility does not necessarily entail lack of clarity."
Even if ambiguity adds flexibility I do not see how it follows
that clarity is maintained.  If there are 4 people in the room and
one says "we", that is less clear than the case where the language
necessitates saying we 3.  I also disagree with "...structural
ambiguity is not particularly bad nor incompatible with 'logical'
expression."  Certainly ambiguity is a major impediment to designing
an intelligent natural language processor.  It would be very desirable
to work with a language that allows natural flexibility without
ambiguity.  And I still maintain that the language is syntax free,
word order or no word order.  And maybe this is the linguistic
find of the century.

        One last point about metaphor, poetry etc.  As an example
to illustrate these capabilities in Sastric Sanskrit, consider
the "bahuvrihi" construct (literally "man with a lot of rice")
which is used currently in linguistics to describe references outside of
compunds.  "Bahuvrihi" is itself an example, literally "bahu"-many
"vrihi" rice.  Much rice is taken here as he who posesses a lot of
rice, and in Classical Sanskrit different case endings can make
"bahu-vrihi" mean "he or she who wants a lot of rice" , "is on a
lot of rice" etc.  Aha! Ambiguity?  Only in Classical, in Sastric
Sanskrit the use of semantic cases instead of syntactic do
not allow any ambiguity.

Rick

------------------------------

Date: 8 Oct 1984 11:10:37 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: Conference - Computational Linguistics Call for Papers


                        CALL FOR PAPERS

23rd Annual Meeting of the Association for Computational Linguistics

                         8-12 July 1985
                      University of Chicago
                        Chicago, Illinois


This international conference ranges over all of computational linguistics,
including understanding, generation, translation, syntax and parsing,
semantics, natural language interfaces, speech understanding and generation,
phonetics, discourse phenomena, office support systems, author assistance,
translation, and computational lexicons.  Its scope is intended to encompass
the contents of an Applied Natural Language Processing Conference as well as
one on Theoretical Issues in Natural Language Processing.  In short, we are
striving for comprehensiveness.

The meeting will include presented papers, system demonstrations, and, on
8 July, a program of computational linguistics tutorials.

Authors should submit, by 18 January 1985, 6 copies of an extended summary
(6 to 8 pages) to William C. Mann, ACL85 Program Chairman, USC/ISI,
4676 Admiralty Way, Marina del Rey, CA 90292, USA; (213)822-1511;
mann@isib.

The summaries should describe completed work rather than intended work, and
should indicate clearly the state of completion and validation of the
research reported, identify what is novel about it, and clarify its status
relative to prior reports.

Authors will be notified of acceptance by 8 March 1985.  Full length
versions of accepted papers prepared on model paper must be received,
along with a signed copyright release notice, by 26 April 1985.

All papers will be reviewed for general acceptability by one of
the two panels of the Program Committee identified below.  Authors
may designate their paper as either an Applications Paper or a
Theory Paper; undesignated papers will be distributed to one or
both panels.


Review Panel for Applications Papers:

Timothy Finin           University of Pennsylvania
Ralph Grishman          New York University
Beatrice Oshika         System Development Corporation
Gary Simons             Summer Institute of Linguistics
Jonathan Slocum         MCC Corporation

Review Panel for Theory Papers:

Robert Amsler           Bell Communications Research
Rusty Bobrow            Bolt Beranek and Newman
Daniel Chester          University of Delaware
Philip Cohen            SRI International
Ivan Sag                Stanford University


Those who wish to present demonstrations of commercial, developmental,
and research computer programs and equipment specific to computational
linguistics should contact Carole Hafner, College of Computer Science,
Northeastern University, 360 Huntington Avenue, Boston MA 02115, USA;
(617)437-5116 or (617)437-2462; hafner.northeastern@csnet-relay.  For
planning purposes, we would like this information as early as possible,
but certainly before 30 April.

Local arrangements will be handled by Martha Evens, Computer Science
Department, Illinois Institute of Technology, Chicago, IL 60616, USA;
(312)567-5153 or (312)869-8537; evens@sri-ai.

For other information on the conference, on the 8 July tutorials, and
on the ACL more generally, contact Don Walker (ACL), Bell Communications
Research, 445 South Street, Morristown, NJ 07960, USA; (201)829-4312;
bellcore!walker@berkeley.

Please note that the dates of the conference will allow people to
attend the National Computer Conference, which will be held in Chicago
the following week.

========================================================================

                        PLEASE POST

                        PLEASE REDISTRIBUTE

------------------------------

End of AIList Digest
********************

∂13-Oct-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #137    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Oct 84  00:45:37 PDT
Date: Fri 12 Oct 1984 23:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #137
To: AIList@SRI-AI


AIList Digest           Saturday, 13 Oct 1984     Volume 2 : Issue 137

Today's Topics:
  Fuzzy Logic - Query,
  AI Literature - The AI Report,
  AI Tools - OPS5 & LM-Prolog & VMS PSL 3.2,
  Lisp Machines - TI Explorers,
  Games - ACM Chess Tournament & Chess Planning,
  Seminar - Knowledge Based Software Development,
  Conference - AI Society of New England
----------------------------------------------------------------------

Date: 10 Oct 84 8:55:33-PDT (Wed)
From: hplabs!intelca!qantel!dual!fortune!polard @ Ucb-Vax.arpa
Subject: Fuzzy logic references wanted
Article-I.D.: fortune.4472

Would anyone be kind enough to send me (or post) a list of readings
that would serve as an introduction to fuzzy logic?

                        Thank you,
                        Henry Polard

Henry Polard (You bring the flames - I'll bring the marshmallows.)
{ihnp4,cbosgd,amd}!fortune!polard
N.B: The words in this posting do not necessarily express the opinions
of me, my employer, or any AI project.

------------------------------

Date: Thu 11 Oct 84 21:09:44-PDT
From: ROBINSON@SRI-AI.ARPA
Subject: Omission

Your list of AI information resources omits a significant
publication:

        The Artificial Intelligence Report

published by Artificial Intelligence Publications.

------------------------------

Date: 11 Oct 84 13:38:27 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Addendum to OPS5 list.

Some readers of this list pointed out a couple of omissions in the OPS5
summary posted a few days ago; thanks are due them for the additional
material.
A version of OPS5 called OPS5E, running on the Symbolics 3600 is available from
        Verac, Inc.
        10975 Torreyana Road, Suite 300
        San Diego, CA 92121
Prices: $3000 object code, $10000 source, $1000 one year support.
There is also a version for the Xerox D series machines (Dandelion, Dolphin,
Dorado) available from
        Science Applications International Corp.
        1200 Prospect St.
        P.O.Box 2351
        La Jolla, CA 92038
        (619) 454-3811
Price: $2000.

------------------------------

Date: 12 Oct 84 09:23 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Re: LM-Prolog, Grammar Kit

My experiences using LM-Prolog have been very positive, but I am surely
not an unbiased judge (being one of the co-authors of the system).  (I
am tempted to give a little ad for LM-Prolog here, but will refrain.
Interested parties can contact me directly.)

Regarding the Grammar Kit, the main thing that distinguishes it from
other DCGs is that it can continuously maintain a parse tree.  The tree
is drawn as parses are considered and parts of it disappear upon
backtracking.  I have found this kind of dynamic graphic display very
useful for explaining  Prolog and DCGs to people as well as debugging
specific grammars.
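
[The behavior described above -- tree nodes appearing as parses are
attempted and vanishing on backtracking -- can be imitated in a toy
backtracking parser.  The sketch below is an assumed illustration of the
idea, not LM-Prolog or Grammar Kit code; the "events" list records the
build/retract sequence a dynamic display could animate. -- Ed.]

```python
# Assumed illustration (not LM-Prolog or Grammar Kit code): a backtracking
# recursive-descent parser over a toy DCG-style grammar that logs 'build'
# and 'retract' events for each nonterminal node -- the sequence a dynamic
# parse-tree display could draw and erase.

GRAMMAR = {
    's':   [['np', 'vp']],
    'np':  [['det', 'n']],
    'vp':  [['v', 'np'], ['v']],
    'det': [['the']],
    'n':   [['cat'], ['dog']],
    'v':   [['saw']],
}

def parse(symbol, words, i, events):
    """Yield (tree, next_index) pairs for symbol starting at position i."""
    if symbol not in GRAMMAR:                    # terminal symbol
        if i < len(words) and words[i] == symbol:
            yield (symbol, i + 1)
        return
    for rhs in GRAMMAR[symbol]:
        events.append(('build', symbol))         # node appears on screen
        def seq(symbols, j):
            if not symbols:
                yield ([], j)
                return
            for tree, k in parse(symbols[0], words, j, events):
                for rest, m in seq(symbols[1:], k):
                    yield ([tree] + rest, m)
        for kids, j in seq(rhs, i):
            yield ((symbol, kids), j)
        events.append(('retract', symbol))       # backtracked: node erased

events = []
words = 'the cat saw the dog'.split()
trees = [t for t, j in parse('s', words, 0, events) if j == len(words)]
print(trees[0])
```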

------------------------------

Date: Thu 11 Oct 84 07:16:44-MDT
From: Robert R. Kessler <KESSLER@UTAH-20.ARPA>
Subject: PSL 3.2 for Vax VMS

                        PSL 3.2 for Vax VMS

We are pleased to announce that Portable Standard LISP (PSL) version 3.2 is
now available for Vaxen running the VMS operating system.  PSL has about the
power, speed and flavor  of Franz LISP or  MACLISP, with growing  influence
from Common  LISP.  It  is recognized  as an  efficient and  portable  LISP
implementation with  many  more capabilities  than  described in  the  1979
Standard LISP Report.  PSL's main  strength is its portability across  many
different  systems,   including:   Vax  BSD   Unix,   Extended   Addressing
DecSystem-20 Tops-20, Apollo DOMAIN  Aegis, and HP  Series 200.  A  version
for the IBM-370 is in beta test, a Sun version is 90% complete and two Cray
versions are being used on an experimental basis.  Since PSL generates very
efficient code, it is an ideal delivery vehicle for LISP based applications
(we can  also provide  PSL reseller  licenses for  binary only  and  source
distributions).

PSL is distributed for the  various systems with executables, all  sources,
an approximately  500 page  manual and  release notes.   The release  notes
describe how to install the system and how to rebuild the various  modules.
We are charging  $750 for the  Vax/VMS version of  PSL for Commercial  Site
licenses.  Non-profit institutions and all  other versions of PSL will  not
be charged a license fee.  We are also charging a $250 tape or $350  floppy
distribution fee for each system.

PSL is in heavy use at Utah, and by collaborators at Hewlett-Packard, Rand,
Stanford, Columbia and over  200 other sites.   Many existing programs  and
applications have been  adapted to  PSL including  Hearn's REDUCE  computer
algebra system and GLISP, Novak's object oriented LISP dialect.  These  are
available from Hearn and Novak.

To obtain a copy of the license  and order form, please send a NET  message
or letter with your US MAIL address to:

Utah Symbolic Computation Group Secretary
University of Utah - Dept. of Computer Science
3160 Merrill Engineering Building
Salt Lake City, Utah 84112

ARPANET: CRUSE@UTAH-20
USENET:  utah-cs!cruse

------------------------------

Date: Thu, 11 Oct 84 03:52:11 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: TI Lisp Machines.

The recent article from PENTLAND@SRI-AI has some interesting benchmark
data.  I am looking seriously at Lisp Machines for purchase in the near
future, so I went around to the Xerox, Symbolics and LMI people at ACM
84.  I was told by the LMI folks that they were OEMs for the TI
machines.  (The machines do look almost identical.)  So I didn't chat
with the TI folks -- perhaps a mistake.  If LMI does OEM their machines
to TI, why the difference in performance?  Perhaps someone in the know
can clarify this.

If anyone out there with comparative experience in these various
machines can say a few words on what they think are the relative merits
of each vendor's product it would be quite helpful to prospective
buyers.  I came away with little substantive basis for comparison from
talking with the salesmen.  Most of them were high on pretension, low
on comprehension and quite adept at parrying questions.

As an incidental note, I found at the conference that Lisp and Prolog
are now available under PRIMOS on Prime computers.  A positive side-
effect of the increased interest in AI is the widening spectrum of
environments supporting AI languages, an important factor for soft-
ware producers looking for a wide market.

                                            Harry Weeks
                                            (Weeks@UCBpopuli)

P.S.
I just happened to read the latest Datamation today [10/11] and it
contains a news article which also provides some information on the
TI machines.

------------------------------

Date: Thu 11 Oct 84 23:52:54-PDT
From: PENTLAND@SRI-AI.ARPA
Subject: TI Lispm Timings - Clarification

Re: TI Lisp Machine timings

People have criticized me for the recently circulated comparison of TI
and Symbolics machines, mistaking the simple, rough timings I ran on
the TI and Symbolics machines for serious benchmarks.  I am surprised
that anyone thinks that benchmarking a machine can be as simple as the
comparison I did, which was limited by a need for extreme brevity.
I therefore want to make clear that the timings I ran were ROUGH, QUALITATIVE
measures of very limited portions of the machines' performance, and
bear only a VERY ROUGH, ORDER-OF-MAGNITUDE RELATIONSHIP TO THE TRUE
PERFORMANCE of the machines.  That is, there is NO warranty of
accuracy for such simple tests.  Serious benchmarking has yet to be
done.
        Alex Pentland

------------------------------

Date: Fri 12 Oct 84 16:49:27-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: TI Explorers for MIT

Mike Green of Symbolics told us that MIT's "multi-million-dollar order" is
essentially a gift from TI to MIT.  He said that MIT has confirmed this.
Apparently, TI is donating 200 machines to MIT and giving them the option to
buy another 200 at $28K each over the next two years.  However, TI is working
to get DARPA to pay for the second 200!  If this is true, I just may "order"
a few hundred myself.

Dallas Webster
CMP.BARC@UTexas-20.ARPA

------------------------------

Date: 11 Oct 84 12:06:18 EDT
From: Feng-Hsiung.Hsu@CMU-CS-VLSI
Subject: ACM Chess Tournament

           [Forwarded from the CMUC bboard by Laws@SRI-AI.]

The following message was posted on usenet:

     The standings follow.  Ties were broken by considering the sum of
the opponents' scores.  Since 'Bebe' and 'Fidelity X' deadlocked here, the
sum of the opponents' opponents' scores was tallied.  Dead heat again, so
by fiat, Fidelity walked home with the second-place trophy, since Bebe finished
second at ACM '83.  (At least, I think this is what happened, the groggy
hardcore disbanding at 1 am).

     There were surprises, including a disappointing showing by Belle.
I shall leave game commentary to the experts.  Mike Valvo and Danny Kopec
emceed the fourth round, and several other masters were in attendance,
including former World Juniors champ Julio Kaplan.

     Blitz was running on a 420 MIP four-barrel Cray XMP-48, computing
100K nodes per second (Belle does 160K).  Bebe is a custom bit-slice micro,
with hardware assist for various functions.  Fidelity is a commercial 6 MHz
6502, and International Software Experimental is David Levy's Apple II.

        Cray Blitz      2150        4
        Fidelity X      1900        3
        Bebe            1927        3
        Chaos           1714        3
        Belle           2200        2.5
        Nuchess         2100        2
        Phoenix         1910        2
        Novag X         1970        2
        Int. Soft. X    2022 (est)  2
        Schach 2.7      N/A         1.5
        Ostrich         1475        1
        Awit            1600        1
        Merlin          N/A         1
        Xenarbor        N/A         0

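The tie-break rule described above (score first, then sum of opponents'
scores, then sum of opponents' opponents' scores) can be sketched as
follows.  This is a modern illustrative sketch; the four-player round
robin and all names in it are invented, not the actual ACM '84 pairings.

```python
# Sketch of the tie-break rule above: rank by score, then by the sum of
# opponents' scores, then by the sum of opponents' opponents' scores.
# The four-player round robin below is invented for illustration.

scores = {"A": 3, "B": 3, "C": 2, "D": 0}
opponents = {                      # who each player faced
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}

def solkoff(p):
    """Sum of the scores of p's opponents."""
    return sum(scores[o] for o in opponents[p])

def solkoff2(p):
    """Sum of the opponents' opponents' scores."""
    return sum(solkoff(o) for o in opponents[p])

def standings():
    """Players in descending order of (score, tie-break 1, tie-break 2)."""
    return sorted(scores,
                  key=lambda p: (scores[p], solkoff(p), solkoff2(p)),
                  reverse=True)
```

With this data, A and B deadlock at both tie-break levels (5 and 5,
then 19 and 19), so something outside the rule -- a fiat, as with Bebe
and Fidelity X -- has to settle the order.
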
------------------------------

Date: Tue, 9 Oct 84 21:07:34 edt
From: krovetz@nlm-mcs (Bob Krovetz)
Subject: chess and planning

A very nice paper on a program that uses planning in making chess
moves is:

 "Using Patterns and Plans in Chess", Dave Wilkins, Artificial
  Intelligence, Vol. 14, 1980.

The program is called PARADISE, and has found a mate that was 19 ply
deep!


-Bob (Krovetz@NLM-MCS)

------------------------------

Date: 11 Oct 1984 1306-EDT
From: Scott Dietzen <DIETZEN@CMU-CS-C.ARPA>
Subject: Seminar - Knowledge Based Software Development

           [Forwarded from the CMUC bboard by Laws@SRI-AI.]

                           Friday, October 12
                           2:00 PM in Wean 5409

              Knowledge Based Software Development in FSD
                          Robert Balzer
                             USC/ISI


        Our group is pursuing the goal of an automation-based software
development paradigm.  While this goal is still distant, we have embedded
our current perceptions and capabilities in a prototype (FSD) of such a
software development environment.  Although this prototype was built
primarily as a testbed for our ideas, we decided to gain insight by
using it, and have added some administrative services to expand it from
a programming system to a computing environment currently being used by
a few ISI researchers for all their computing activities.  This "AI
operating system" provides specification capabilities for Search,
Coordination, Automation, Evolution and Inter-User Interaction.

        Particularly important is evolution, as we recognize that useful
systems can only arise, and remain viable, through continued evolution.
Much of our research is focused on this issue and several examples will
be used to characterize where we are today and where we are headed.
Naturally, we have started to use these facilities to evolve our system
itself.

------------------------------

Date: Thu, 11 Oct 84 17:43:05 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: Conference - AI Society of New England


               The Sixth Annual Conference of the
         Artificial Intelligence Society of New England

                        Oct. 26-27, 1984


It is time once again for our legendary annual AISNE meeting!  In
keeping  with our time-honored tradition, we will have an invited
speaker for Friday night, with panel  discussions  and  talks  by
students on Saturday.

Accommodations on Friday night will be informal. Bring a sleeping
bag,  and we can find you a place to stay. If you want us to find
you a place, tell Doug Stumberger at Boston University  how  many
bodies  you  have.  Note: If you have a faculty representative at
your institution, they can pass this information on to  Doug  for
you in order to minimize long distance phone calls. (If you don't
know who your faculty rep. is, it's probably the person who  dis-
tributed  this  announcement.)  There is no admission charge, and
no formal registration necessary, though if you need informal
accommodations for Friday night, please let Doug know.


The event will be held at:

                 Department of Computer Science
                        Boston University
                      111 Cummington Street
                           Boston, MA

The Program is:

                         Friday, Oct. 26

8:00 pm. Invited Talk by David Waltz (Brandeis University)
         "Massively Parallel Models and Hardware for AI"

9:00 pm. Libational Social Hour

                       Saturday, Oct. 27:

10:00 am. Panel discussion chaired by Elliot Soloway (Yale)
                 "Intelligent Tutoring Systems"

11:30 am. Talks on Academic Research Projects (15 min. each)

12:30 pm. Lunch

2:00 pm. Panel discussion chaired by Michael  Lebowitz  (Columbia U.)
                    "Natural Language - What Matters?"

3:30 pm. More Talks

4:30 pm. AISNE Business Meeting


Program Coordinator:                     Local Coordinator:

Wendy Lehnert                           Douglas Stumberger
COINS                                   Department of Computer Science
University of Massachusetts             111 Cummington Street
Amherst, MA 01003                       Boston, MA 02215
413-545-3639                            617-353-8919

csnet: lehnert@umass-cs                 csnet: des@bostonu
                                        bitnet: csc10304@bostonu

------------------------------

End of AIList Digest
********************

∂14-Oct-84  2048	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #138    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Oct 84  20:48:24 PDT
Date: Sun 14 Oct 1984 19:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #138
To: AIList@SRI-AI


AIList Digest            Monday, 15 Oct 1984      Volume 2 : Issue 138

Today's Topics:
  Metadiscussion - Citing AIList,
  AI - Definition,
  Linguistics - Mailing List & Sastric Sanskrit & Language Evolution,
  Conference - SCAIS
----------------------------------------------------------------------

Date: 14 Oct 84 19:56:17 EDT
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: AILIST as a source of info....


Many recent AILIST discussions have fascinated me, and I'm sure that
at some point in the near future I'll be using information presented
here for a paper or two.  Just exactly how do I credit an electronic
bboard in a research paper?  And who (i.e. moderator, author of
info, etc.) do I give credit to?
                                        -Allen LUTINS@RU-BLUE


[Certainly the author must be credited.  I am indifferent as to
whether AIList is mentioned since I consider the digest just a
communication channel by which authors circulate their unpublished
ideas.  (You wouldn't cite Ma Bell or your Xerox copier.)  This
viewpoint is intended to avoid copyright difficulties.  On the
other hand, a reference to AIList might help someone look up the
full context of a discussion.  Does any librarian out there know
a good citation form for obscure newsletters, etc., that the
reader could not be expected to track down by name alone?  -- KIL]

------------------------------

Date: 14 Oct 84 14:49:51 EDT
From: McCord @ DCA-EMS
Subject: Model for AI Applications


Since the beginning, some intelligence, albeit explicit and highly
focused, has been built into nearly every program written.  This is
obviously not the "artificial" intelligence we now talk about, market, and
sell.  Surely, to be worthy of the title "artificial" intelligence,
an AI application must exhibit some minimum characteristics such as
a specified level of control over its environment, the ability to learn,
and its transportability or adaptability to related applications.
Has anyone developed a model of an AI application that may be used to
discriminate between "programs" and "artificial" intelligence?

Also, does anyone have any comments on Dr. Frederick Brooks's (of
The Mythical Man-Month fame) pragmatic approach ("Intelligence
Amplification (IA) Is Better Than Artificial Intelligence (AI)")
to AI?

------------------------------

Date: Fri, 12 Oct 84 17:09:29 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: natural language mailing list


        Does anyone know of a mailing list devoted solely to
linguistics/computational linguistics?

douglas stumberger
csnet: des@bostonu
bitnet: csc10304@bostonu

------------------------------

Date: Thu 11 Oct 84 18:15:59-MDT
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Sastric Sanskrit

Coming from India and having learnt a little bit of Sanskrit, let me make a
few comments to add to Rick Briggs's claims.  I do not know for a fact if
Sastric Sanskrit is unambiguous.  In fact, I have not heard of it before.
But, its unambiguity seems plausible.

First of all, as to the history of Sanskrit.  It is an Indo-European
language but it has an independent line of development from all the
languages spoken outside the Indian subcontinent, i.e., all its daughters
are spoken, to the best of my knowledge, only in the subcontinent.  Not
only its dhatus but its methodologies have been inherited by its daughters.
Even the Dravidian languages (the other family of languages spoken in the
subcontinent which are not daughters of Sanskrit) have been influenced by
its methodologies.  For example, the first formal grammar of my own mother
tongue, which is not born of Sanskrit, was written in Sanskrit
Panini-style.

Strictly speaking, neither Sanskrit nor its daughters have a fixed word order.
The sophisticated case system makes it possible to communicate without word
order.  The subject and object are identifiable from their own cases
independent of their position in a sentence.  Incidentally, the cases are
merely a convenience.  The prepositions (which become suffixes in Sanskrit
and its daughters) serve the same purpose, though they are more verbose.
However, the role of various words in a sentence is not always
independently identifiable.  This leads to ambiguity rather than
unambiguity.  Kiparsky's example
        "rajna bhikshuna bhavitavyam"
has BOTH the meanings
        "the beggar will have to become the king"
and
        "the king will have to become the king"
The latter meaning is normally understood, because it interprets the
sentence in the word order "subject-object-verb" which is the most
frequently used.  This kind of ambiguity is more the exception than
the rule.  I would say it occurs not more than 5% of the time in normal
prose.  It is resolved by resorting to the "natural" word order.
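
The point about case marking can be illustrated with a toy parser: when
grammatical role is carried by a case ending, the same roles are
recovered whatever the word order.  This is a modern sketch in Python;
the suffix table and the sample words are invented for the example and
are not real Sanskrit morphology.

```python
# Toy illustration of role assignment by case ending rather than by
# position.  The suffix table is invented and is not real Sanskrit
# morphology.

CASE_SUFFIXES = {
    "-nom": "subject",   # nominative marks the subject
    "-acc": "object",    # accusative marks the object
}

def parse(sentence):
    """Assign roles from case endings; word order is irrelevant."""
    roles = {}
    for word in sentence.split():
        for suffix, role in CASE_SUFFIXES.items():
            if word.endswith(suffix):
                roles[role] = word[: -len(suffix)]
    return roles
```

Both orders yield the same reading, as with free word order:
parse("rama-nom grantham-acc") and parse("grantham-acc rama-nom")
produce identical role assignments.  Ambiguity of the Kiparsky kind
arises precisely when two words carry the same case marking, so that
the table alone cannot decide who plays which role.
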

Sastric Sanskrit is a subset of normal Sanskrit, i.e., every sentence of
Sastric Sanskrit is also a sentence of normal Sanskrit.  This also means
that Sastric Sanskrit did not evolve naturally on its own, but was the
result of probably hundreds of years of research to eliminate ambiguity in
communication.  It should be possible for the initiated and knowledgeable
to dig up the research that went into the development of this subset.

What seems to be important is whether an unambiguous subset of a language
can be formed by merely imposing rules on how sentences can be formed.  I
am not convinced of that, but I also cannot say it is impossible.
Ancient Indian scholars had a curious mixture of dogma and reason.  One
cannot take their claims at their face value.

If an unambiguous subset of Sanskrit could be developed, it should also be
possible for all the languages.  What is special about Sanskrit is that the
redundancy needed to disambiguate the language could be added in Sanskrit
without substantial loss of convenience.  In English, adding this
redundancy leads to a lot of awkwardness, as Briggs's examples show.

Uday Reddy

------------------------------

Date: 12 Oct 84 09:30 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Language Evolution

This discussion of Sanskrit leads me to ask the question of why
languages have evolved  the way they have.  Why have they moved away
from case?  Generalizing from the only example I know of (Old Norse to
Modern Swedish) I wonder why distinctions that seem useful have
disappeared.
For example, Old Norse had singular, plural, and dual (when two people
were involved).  Why would such a distinction come into a language and
then disappear hundreds of years later.  Why did Sastric Sanskrit die?

[Otto Jespersen (1860-1943), famous Danish linguist, studied such matters
at a time when classical Greek and Latin were very much in vogue and
modern languages with few cases, genders, tenses, and moods were
considered retrogressive.  He held the opposed view that English and
Chinese were the most advanced languages, and that the superiority
of modern languages stems from seven characteristics:

  1) Shorter forms, easier and faster to speak.  The Gospel of St.
     Matthew contains 39k syllables in Greek, 33k in German, 29k
     in English.

  2) Fewer forms to burden memory.  Gothic habaida, habaides,
     habaidedu, and 12 other forms map to just "had" in English.

  3) Regular formation of words.

  4) Regular syntactic use of words.

  5) Flexible combinations and constructions.  Danish "enten du
     eller jeg har uret" is straightforward, whereas the inflected
     "either you or I am wrong" or "either you are wrong, or I"
     is awkward.

  6) Lack of repetitious concord.  Latin "opera virorum omnium
     bonorum veterum" expresses plurality four times, genitive
     case four times, and masculine gender twice; the English
     "all good old men's works" has no such repetition.

  7) Ambiguity is eliminated through regular word order.

Jespersen designed his own artificial language, Novial, after working
on a modified (and never adopted) form of Esperanto called Ido.

For more information, see Peter Naur's Programming Languages, Natural
Languages, and Mathematics in the December 1975 issue of Communications
of the ACM.  -- KIL]

------------------------------

Date: Sat, 13 Oct 84 13:39:03 PDT
From: Southern California AI Society <scais@UCLA-LOCUS.ARPA>
Subject: Conference - SCAIS

I noticed the announcement of AISNE on AIList.  Since SCAIS is
inspired by AISNE,  it seems appropriate to announce it in
AIList also.   Here goes:

************************************************************************
                     1ST MEETING OF SCAIS

SCAIS -- Southern California Artificial Intelligence Society
         (Pronounced "skies".)

The purpose of SCAIS is to help create an AI community spirit  among  AI
researchers  and research labs in the Southern California area.  (As far
south as San Diego and as far  north  as  Santa  Barbara,  but  probably
concentrated in the greater LA area.)

SCAIS is inspired by AISNE (AI Society of New England).  AISNE meets  at
least  once  a year, at locations such as Yale, MIT, UMass, Stony Brook,
etc. in the New England area.  (See prior AIList announcement of AISNE.)

Our  first  SCAIS meeting is intended to give everyone an opportunity to
meet other active AI researchers and  graduate  students  in  the  area.
Short  talks  on  research projects will be given by students and AI lab
project leaders, who will describe what AI research is going on  in  the
area.   In  addition,  we  hope  to  generate a list of the names, phone
numbers, net mailing addresses, and research interests of the attendees.
If  our  first SCAIS meeting is successful, future meetings will then be
held on a periodic basis at different sites.

SCAIS  is  intended  for serious AI researchers and graduate AI students
who reside in S.  Calif., who are working  in  the  field  and  who  are
interested  in learning about the research of others in the 'greater' LA
area.  SCAIS is NOT intended as a forum for industrial recruiting or for
interested  on-lookers.   Attendance  at  our  first  SCAIS  meeting  is
expected to be 100-150 people and is by invitation only.

AI researchers in the S.  Calif.  area  can  request  an  invitation  by
contacting:  SCAIS-REQUEST@UCLA-CS.ARPA or SCAIS-REQUEST@UCLA-LOCUS.ARPA
(or ...!ucla-cs!scais-request on uucp).  You should include  your  name,
affiliation, address, net-address, phone number, and research area.
************************************************************************

    (almost complete) AGENDA of 1st SCAIS Conference

(Oct 29, 8:00am-7:00pm, California Room, UCLA Faculty Center)

8:00 - 8:30    Morning OPEN HOUSE at UCLA AI Lab & Demos
8:30 - 8:40    Michael Dyer -- Welcome and Overview of UCLA AI
8:40 - 10:15   SESSION #1

     UCLA  (75 min)
     ==============
     Sergio Alvarado (stu) -- "Comprehension of Editorial Text"
     Uri Zernik (stu)    --  "Adult Language Learning"
     Erik Mueller (stu)  --  "Daydreaming and Story Invention"
     Charlie Dolan (stu)  --  "Reminding and Analogy"
     Judea Pearl -- "Learning Hidden Causes from Raw Data"
     Ingrid Zuckerman (stu) -- "Listener Model for Generation
                             of Meta-Technical Utterances in Math Tutoring"
     Rina Dechter (stu) -- "Mechanical Generation of Heuristics for
                            Constraint-Satisfaction Problems."
     Tulin Mangir -- "Applications of Expert Systems to CAD and CAT of VLSI"
     Vidal -- "Reconfigurable Logic Knowledge Representation and
               Architectures for Compact Expert Systems"

     Aerospace Corp. (20 min)
     ========================
     Steve Crocker  --  "Overview"
     Paul Mazaika -- "False Event Elimination"
     Ann Brindle  --  "Automated Satellite Control"
     John Helly -- "Representational Basis for A Distributed Expert System"
     *Break*  (coffee & danish)  10:15 - 10:30

10:30 - 11:50  SESSION #2

     UC Irvine (60 min)
     ==================
     Pat Langley -- "Overview of UCI AI Research"
     Rogers Hall (stu) -- "Learning in Multiple Knowledge Sources"
     Student (w/ Rick Granger) --  " NEED TITLE "

     IBM (20 min)
     ============
     John Kepler -- "Overview of IBM Scientific Center Activities in AI"
     Gary Silverman -- "The Robotics Project"
     Alexander Hurwitz -- "Intelligent Help for Computer Systems"

11:50 - 1:10   LUNCH  (Sequoia Rooms 1,2,3 in Faculty Center)
1:10 - 2:40    SESSION #3

     USC/ISI (90 min)
     ================
     Kashif Chaudhry (stu) -- "The Advance Robot Programming Project"
     Shari Naberschnig (stu) -- "The Distributed Problem Solving Project"
     Yigal Arens  --   "Natural Language Understanding Research at USC"
     Ram Nevatia  --   "Overview of Computer Vision Research at USC"
     Dan Moldovan --  "Parallel Processing in AI"

     Jack Mostow -- "Machine Learning Research at ISI"
     Bill Mann -- "Natural Language Generation Research at ISI"
     Norm Sondheimer -- "Natural Language Interface Research at ISI"
     Tom Kaczmarek -- "Intelligent Computing Environment Research at ISI"
     Bob Neches -- "Expert Systems Research at ISI"
     Bob Balzer -- "Specification-Based Programming Research at ISI"
     *Break*        2:40 - 2:55    (coffee & punch)

3:00 - 4:20    SESSION #4

     Hughes AI Center (20 min)
     =========================
     D. Y. Tseng -- "Overview of HAIC Activities"

     JPL (20 min)
     ====================
     Steven Vere -- "Temporal Planning"
     Armin Haken -- "Procedural Knowledge Sponge"
     Len Friedman   "Diagnostics and Error Recovery"

     TRW (20 min)
     ============
     Ed Taylor -- "AI at TRW"

     Rand Corp (20 min)
     ==================
     Phil Klahr     --  "Overview of Rand's AI Research"
                        "AI in Simulation"
     Henry Sowizral --  "Time Warp"
                        "ROSIE: An Expert System Language"
     Don Waterman   --  "Explanation for Expert Systems"
                        "Legal Reasoning"
     Randy Steeb    --  "Cooperative Intelligent Systems"
     *Break*        4:20 - 4:40  (coffee & punch)

4:40 - 6:00    SESSION #5

     UC San Diego  (20 min)
     ======================
       Paul Smolensky -- "Parallel Computation:  The Brain and AI"
       Paul Munro -- " Self-organization and the Single Unit:  Learning
                       at the Neuronal Level"

     SDC  (20 min)
     =============
       Dan Kogan -- "Intelligent Access to Distributed Data Management"
       Robert MacGregor -- "Logic-Based Knowledge Management System"
       Beatrice T. Oshika -- "User Interfaces:  Speech and Nat. Lang."

     Cal State, Fullerton (10 min)
     =============================
     Arthur Graesser -- "Symbolic Procedures of Question Answering"

     Rockwell Science Center (5 min)
     ===============================
     William Pardee -- "A Heuristic Factory Scheduling System"

     General Research Corp (5 min)
     =============================
     Jim Kornell -- "Analogical Inferencing"

     Northrop  (5 min)
     =================
     Steve Lukasis   -- "NEED TITLE"

     Aerojet (5 min)
     ===============
     Ben Peake  --  "NEED TITLE"

     Litton (5 min)
     ==============
     speaker  -- "NEED TITLE"

     Logicon (5 min)
     ===============
     John Burge -- "Knowledge Engineering at Logicon"


6:00 - 7:00    GENERAL MEETING OF SCAIS MEMBERS

     SCAIS Panel & General Meeting
          possible themes:
               * Assessment - Where from here?
               * State of AI in S. Calif.
               * Organization of SCAIS
               * Future Hosting
               * Univ - Industry connections
               * Software - hardware community sharing
               * Arrival of IJCAI-85 in LA
               * LA AI Consortium/Institute ???

7:00 - 7:30    Evening OPEN HOUSE at UCLA AI Lab & Demos
               (3677 Boelter Hall)

> 7:30pm       Interested parties may form groups and dine
               at various restaurants in Westwood Village

------------------------------

End of AIList Digest
********************

∂17-Oct-84  1249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #139    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Oct 84  12:47:36 PDT
Date: Wed 17 Oct 1984 10:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #139
To: AIList@SRI-AI


AIList Digest           Wednesday, 17 Oct 1984    Volume 2 : Issue 139

Today's Topics:
  Seminars - Monotonic Processes in Language Processing
    & Qualitative Analysis of MOS Circuits
    & Knowledge Retrieval as Specialized Inference
    & Juno Graphics Constraint Language
    & PECAN Program Development System
    & Aesthetic Experience,
  Symposium - Complexity of Approximately Solved Problems,
  Course - Form and Meaning of English Intonation
----------------------------------------------------------------------

Date: Wed, 10 Oct 84 16:09:03 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Monotonic Processes in Language Processing

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984

           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, October 16, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Martin Kay, Xerox Palo Alto Research Center;
                Center  for the Study of Language and Infor-
                mation, Stanford University

TITLE:          Monotonic Processes in Language Processing

ABSTRACT:       Computation  proceeds  by  manipulating  the
                associations  between  (variable)  names and
                values  in  accordance  with  a  program  of
                rules.  If an association, once established,
                is never changed,  then  the  process  as  a
                whole is monotonic.  More intuitively, mono-
                tonic processes can add arbitrary amounts of
                detail  to  an  existing  picture so long as
                they never change  what  is  already  there.
                Monotonic  processes underlie several recent
                proposals in linguistic theory  (e.g.  GPSG,
                LFG  and  autosegmental  phonology)  and  in
                artificial intelligence (logic programming).
                I  shall  argue  for seeking monotonic solu-
                tions to linguistic problems wherever possi-
                ble  while  rejecting  some  arguments  fre-
                quently made for the policy.
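
The core notion in the abstract above -- an association between a name
and a value that, once established, is never changed -- can be sketched
as follows.  This is a minimal modern illustration, not Kay's
formalism; the class and names are invented for the example.

```python
# Toy monotonic binding environment, in the spirit of the abstract
# above: bindings may be added, and re-asserted, but never changed.
# (Illustrative only -- not Kay's actual formalism.)

class MonotonicEnv:
    def __init__(self):
        self._bindings = {}

    def bind(self, name, value):
        """Add a binding; rebinding to a different value is an error."""
        if name in self._bindings and self._bindings[name] != value:
            raise ValueError(f"monotonicity violated for {name!r}")
        self._bindings[name] = value

    def lookup(self, name):
        return self._bindings.get(name)

env = MonotonicEnv()
env.bind("number", "plural")      # add detail to the picture
env.bind("number", "plural")      # re-asserting the same value is fine
# env.bind("number", "singular")  # would raise: changes what is there
```

A process built from such an environment can only add detail; it can
never retract or overwrite it, which is what makes it monotonic.
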

------------------------------

Date: 15 Oct 1984  11:17 EDT (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Qualitative Analysis of MOS Circuits

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday   October 17, 1984    4:00pm  8th floor playroom

Brian C. Williams
Qualitative Analysis of MOS Circuits

With the push towards sub-micron technology, transistor models have
become increasingly complex.  The number of components in
integrated circuits has forced designers' efforts and skills towards
higher levels of design.  This has created a gap between design
expertise and the performance demands increasingly imposed by the
technology.  To alleviate this problem, software tools must be developed
that provide the designer with expert advice on circuit performance and
design.  This requires a theory that links the intuitions of an expert
circuit analyst with the corresponding principles of formal theory (i.e.,
algebra, calculus, feedback analysis, network theory, and
electrodynamics), and that makes each underlying assumption explicit.

Temporal Qualitative Analysis is a technique for analyzing the
qualitative large signal behavior of MOS circuits that straddle the line
between the digital and analog domains.
Temporal Qualitative Analysis is based on the
following four components:  First, a qualitative representation is
composed of a set of open regions separated by boundaries.  These
boundaries are chosen at the appropriate level of detail for the
analysis.  This concept is used in modeling time, space, circuit state
variables, and device operating regions.  Second, constraints between
circuit state variables are established by circuit theory.  At a finer
time scale, the designer's intuition of electrodynamics is used to
impose a causal relationship among these constraints.  Third, large
signal behavior is modeled by Transition Analysis, using continuity and
theorems of calculus to determine how quantities pass between
regions over time.  Finally, Feedback Analysis uses
knowledge about the structure of equations and the properties of
structure classes to resolve ambiguities.
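[The qualitative-region idea in the first and third components can be sketched in a few lines of modern code. This is an illustration of the general technique only, not Williams' actual system: the boundary values and function names below are invented for the example. A state variable's continuous value is mapped onto an open region or a boundary, and continuity (the intermediate value theorem) forbids jumping between regions without crossing the boundary separating them.

```python
# Boundaries partitioning a continuous quantity (say, a node voltage)
# into open regions; values are hypothetical, chosen for illustration.
BOUNDARIES = [0.0, 2.5, 5.0]

def qualitative_value(x, boundaries=BOUNDARIES):
    # Map a continuous value onto the open region or boundary it occupies.
    # Region i lies between boundaries i-1 and i.
    for i, b in enumerate(boundaries):
        if x < b:
            return ("region", i)
        if x == b:
            return ("boundary", i)
    return ("region", len(boundaries))

def may_transition(src, dst):
    # Continuity: to pass from one open region to an adjacent one, a
    # quantity must first touch the boundary between them; jumping
    # directly from region to region is forbidden.
    kind_s, i_s = src
    kind_d, i_d = dst
    if src == dst:
        return True
    if kind_s == "region" and kind_d == "boundary":
        return i_d in (i_s - 1, i_s)
    if kind_s == "boundary" and kind_d == "region":
        return i_d in (i_s, i_s + 1)
    return False
```

Transition Analysis then amounts to enumerating, at each qualitative time step, only those next states that `may_transition` admits.  -- ed.]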

------------------------------

Date: 1 Oct 1984 13:27-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Knowledge Retrieval as Specialized Inference

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


On Thursday, October 11th, at 10:30 a.m., Alan Frisch, from the
Cognitive Studies Programme, University of Sussex, Brighton, England
and from the Department of Computer Science,  University of Rochester,
Rochester, New York, will speak at the 3rd floor large conference room
at BBN, 10 Moulton Street in Cambridge.

         Knowledge Retrieval as Specialized Inference

  Artificial intelligence reasoning systems commonly employ a
  knowledge  base module that stores a set of facts expressed
  in a representation  language  and provides  facilities  to
  retrieve  these  facts.  Though  there  has  been a growing
  concern  for  formalization  in  the  study   of  knowledge
  representation,  little  has  been  done  to  formalize the
  retrieval process.  This research remedies the situation in
  its  study  of  retrieval  from  abstract  specification to
  implementation.

  Viewing retrieval as a highly specialized inference  process
  that attempts to derive a queried fact from the set of facts
  in the knowledge base enables techniques of formal logic  to
  be  used  in  abstract  specifications.   This talk develops
  alternative specifications for an idealized version  of  the
  retriever incorporated in the ARGOT natural language system,
  shows how  the  specifications  capture  certain  intuitions
  about  retrieval,  and uses the specifications to prove that
  the retriever  has  certain  properties.   A  discussion  of
  implementation  issues  considers an inference method useful
  in both retrieval and logic programming.
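[The core idea of the abstract, retrieval as a deliberately weakened inference process, can be sketched briefly. The fixed-depth restriction below is invented for illustration and is not Frisch's actual specification: depth 0 is pure lookup, and larger depths approach full deduction over the knowledge base's Horn rules.

```python
# Toy propositional knowledge base (facts plus Horn rules).
FACTS = {"bird(tweety)"}
RULES = [("flies(tweety)", ["bird(tweety)"]),
         ("travels(tweety)", ["flies(tweety)"])]

def retrieve(query, depth=1):
    """Succeed if `query` is derivable using at most `depth` rule steps.

    Retrieval as specialized inference: the retriever is a prover whose
    deductive power is intentionally bounded, here by recursion depth.
    """
    if query in FACTS:
        return True          # direct lookup always succeeds
    if depth == 0:
        return False         # inference budget exhausted
    return any(head == query and all(retrieve(b, depth - 1) for b in body)
               for head, body in RULES)
```

With this framing, alternative specifications of a retriever correspond to alternative bounds on the inference it performs.  -- ed.]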

------------------------------

Date: 15 October 1984 1240-EDT
From: Staci Quackenbush at CMU-CS-A
Subject: Seminar - Juno Graphics Constraint Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Name:   Greg Nelson
Date:   October 22, 1984
Time:   3:30 - 4:30 p.m.
Place:  WeH 5409
Title:  "An Overview of Juno"


Connect a computer to a marking engine, and you have a drawing instrument
of unprecedented precision and versatility.  Already some graphic
artists have given up their T-squares and pens for the new world of raster
displays, pointing devices, and laser printers.  But they face a serious
difficulty: to exploit the power and generality of the computer requires
programming.  We can't remove this difficulty, but we can smooth it by
programming in the language of the geometry of images rather than in the
low-level language of some particular representation for images.

These considerations led to the design of Juno, an interactive and
programmable graphics system.  The first basic principle of Juno's design
is that geometric constraints be the mechanism for specifying locations.
For example, a Juno program might specify that points A, B, and C be
collinear and that the distance from A to B equal the distance from
B to C; the interpreter will solve these constraints by numerical methods.
The second principle of the design is that the text of a Juno program be
responsive to the interactive editing of the image that the program produces.
For example, to create a program to draw an equilateral triangle, you don't
type a word: you draw a triangle on the display, constrain it to be
equilateral, and command Juno to extract the underlying program.
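[The constraint-solving step can be illustrated with a toy numerical solver. This is my sketch of the general technique, not Juno's actual interpreter: with A and C fixed, it places B so that A, B, C are collinear and |AB| = |BC|, by gradient descent on the squared constraint residuals using finite-difference gradients.

```python
A, C = (0.0, 0.0), (4.0, 0.0)   # fixed endpoints (hypothetical)

def residuals(bx, by):
    # Collinearity: cross product of (B - A) and (C - A) is zero.
    collinear = (bx - A[0]) * (C[1] - A[1]) - (by - A[1]) * (C[0] - A[0])
    # Equal lengths: |AB|^2 - |BC|^2 is zero.
    equal_len = ((bx - A[0]) ** 2 + (by - A[1]) ** 2
                 - (C[0] - bx) ** 2 - (C[1] - by) ** 2)
    return collinear, equal_len

def solve(bx=1.0, by=1.0, rate=0.01, steps=5000):
    # Crude gradient descent on the sum of squared residuals; a real
    # system would use Newton's method or a least-squares solver.
    h = 1e-6
    def err(x, y):
        r1, r2 = residuals(x, y)
        return r1 * r1 + r2 * r2
    for _ in range(steps):
        e = err(bx, by)
        gx = (err(bx + h, by) - e) / h   # finite-difference gradient
        gy = (err(bx, by + h) - e) / h
        bx -= rate * gx
        by -= rate * gy
    return bx, by
```

For A = (0,0) and C = (4,0) the constraints force B toward the midpoint (2,0), which is what the iteration converges to.  -- ed.]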

------------------------------

Date: Tue 16 Oct 84 09:46:34-PDT
From: Susan Gere <M.SUSAN@SU-SIERRA.ARPA>
Subject: Seminar - PECAN Program Development System

        EE380/CS310 Computer Systems Laboratory Seminar

Time:  Wednesday, October 17,  4:15 p.m.
Place:  Terman Auditorium

Title:  PECAN: Program Development Systems that Support Multiple Views

Speaker:  Prof. Steven Reiss,  C.S.D. Brown University


This talk describes the PECAN family of program development systems.
PECAN is a generator that is based on a simple description of the
underlying language and its semantics.  Program development systems
generated by PECAN support multiple views of the user's program.  The
views can be representations of the program, its semantics and its
execution.  The current program views include a syntax-directed
editor, a Nassi-Shneiderman flow graph, and a declaration editor.
The current semantic views include expression trees, data type
diagrams, flow graphs, and the symbol table.  Execution views include
the interpreter control and a stack and data view.  PECAN is designed
to make effective use of powerful personal machines with
high-resolution graphics displays, and is currently implemented on
APOLLO workstations.

------------------------------

Date: Tue, 16 Oct 84 16:56:22 pdt
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Aesthetic Experience

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, October 23, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Thomas  G.  Bever,  Psychology   Department,
                Columbia University

TITLE:          The Psychological basis of aesthetic experi-
                ence:  implications for linguistic nativism

ABSTRACT:       We define the notion of Aesthetic Experience
                as   a   formal   relation   between  mental
                representations:   an  aesthetic  experience
                involves  at least two conflicting represen-
                tations that are  resolved  by  accessing  a
                third  representation.   Accessing the third
                representation releases  the  same  kind  of
                emotional  energy as the 'aha' elation asso-
                ciated with discovering the  solution  to  a
                problem. We show how this definition applies
                to various art forms: music,  literature,  and
                dance.   The  fundamental aesthetic relation
                is similar to the  mental  activities  of  a
                child  during  normal cognitive development.
                These considerations explain the function of
                aesthetic  experience:  it elicits in adult-
                hood the characteristic mental  activity  of
                normal childhood.

                The fundamental activity revealed by consid-
                ering the formal nature of aesthetic experi-
                ence involves developing  and  interrelating
                mental  representations.   If  we  take this
                capacity  to  be  innate  (which  we  surely
                must),   the question then arises whether we
                can account for the phenomena that are  usu-
                ally argued to show the unique innateness of
                language as a mental organ.  These phenomena
                include  the  emergence of a psychologically
                real grammar,  a critical  period,  cerebral
                asymmetries.     More    formal   linguistic
                properties may be accounted for as partially
                uncaused (necessary) and partially caused by
                general  properties  of  animal  mind.   The
                aspects  of  language  that may remain unex-
                plained (and therefore non-trivially innate)
                are  the  forms of the levels of representa-
                tion.

------------------------------

Date: Mon 15 Oct 84 11:32:31-EDT
From: Delores Ng <NG@COLUMBIA-20.ARPA>
Subject: Symposium - Complexity of Approximately Solved Problems

       SYMPOSIUM ON THE COMPLEXITY OF APPROXIMATELY SOLVED PROBLEMS


                             APRIL 17-19, 1985


                        Computer Science Department
                            Columbia University
                            New York, NY  10027


SUPPORT:  This symposium is supported by a grant from the System Development
Foundation.

SCOPE:  This multidisciplinary symposium focuses on problems which are
approximately solved and for which optimal algorithms or complexity results
are available.  Of particular interest are distributed systems, where
limitations on information flow can cause uncertainty in the solution
of problems.  The following is a partial list of topics: distributed
computation, approximate solution of hard problems, applied mathematics,
signal processing, numerical analysis, computer vision, remote sensing,
fusion of information, prediction, estimation, control, decision theory,
mathematical economics, optimal recovery, seismology, information theory,
design of experiments, stochastic scheduling.

INVITED SPEAKERS: The following is a list of invited speakers.

L. BLUM, Mills College                  C.H. PAPADIMITRIOU, Stanford University
J. HALPERN, IBM                         J. PEARL, UCLA
L. HURWICZ, University of Minnesota     M. RABIN, Harvard University and
                                                  Hebrew University
D. JOHNSON, AT&T - Bell Laboratories    S. REITER, Northwestern University
J. KADANE, Carnegie-Mellon University   A. SCHONHAGE, University of Tubingen
R. KARP, Berkeley                       K. SIKORSKI, Columbia University
S. KIRKPATRICK, IBM                     S. SMALE, Berkeley
K. KO, University of Houston            J.F. TRAUB, Columbia University
H.T. KUNG, Carnegie-Mellon University   G. WASILKOWSKI, Columbia University and
                                                        University of Warsaw
D. LEE, Columbia University             A.G. WERSCHULZ, Fordham University
M. MILANESE, Politecnico di Torino      H. WOZNIAKOWSKI, Columbia University
                                                     and University of Warsaw


CONTRIBUTED PAPERS:  All appropriate papers for which abstracts are contributed
will be scheduled.  To contribute a paper send title, author, affiliation, and
abstract on one side of a single 8 1/2 by 11 sheet of paper.


         TITLES AND ABSTRACTS MUST BE RECEIVED BY JANUARY 15, 1985


PUBLICATION:  All invited papers will appear in a new journal, JOURNAL OF
COMPLEXITY, published by Academic Press, in fall 1985.

REGISTRATION:  The symposium will be held in the Kellogg Conference Center on
the Fifteenth Floor of the International Affairs Building, 118th Street and
Amsterdam Avenue.  The conference schedule and paper abstracts will be
available at the registration desk.  Registration will start at 9:00 a.m.
There is no registration charge.

FOR FURTHER INFORMATION:  The program schedule for invited and contributed
papers will be mailed by about March 15 only to those responding to this
account with the information requested below.  If you have any questions,
contact the Computer Science Department, Columbia University, or call
(212) 280-2736.


To help us plan for the symposium please reply to this account with the
following information.


Name:                                   Affiliation:

Address:


 ( )  I will attend the Complexity Symposium.
 ( )  I may contribute a paper.
 ( )  I may not attend, but please send program.

------------------------------

Date: Mon 15 Oct 84 22:43:20-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Course - Form and Meaning of English Intonation


                        COURSE ANNOUNCEMENT

        Mark Liberman and Janet Pierrehumbert of AT&T Bell Laboratories
will give a course sponsored by the Linguistics Department and the Center
for the Study of Language and Information entitled:


                FORM AND MEANING OF ENGLISH INTONATION


Place: Seminar Room, CSLI, Stanford University
Dates: Monday 5 November - Saturday 17 November
Hours: MWF      16:30-18:00
       TTh      16:30-18:00 & 19:30-21:30
       Sat      10:00-12:30 & 14:00-17:00


A brief description follows:

(1) What

Participants will learn to describe and interpret the stress, tune and
phrasing of English utterances, using a set of systematically arranged
examples, given in the form of transcripts, tapes and pitch contours.
The class will also make use of an interactive real-time pitch detection
and display device.

We will provide a theory of English intonation patterns and their
phonetic interpretation, in the form of an algorithm for generating
synthetic F0 contours from underlying phonological representations.
We will investigate the relation of these patterns to the form, meaning
and use of the spoken sentences that bear them, paying special attention to
intonational focus and intonational phrasing.

Problem sets will develop or polish participants' skills in the exploration
of experimental results and the design of experiments.

(2) Who

No particular background knowledge will be presupposed, although
participants will have to acquire (if they do not already have) at least
a passive grasp of many technical terms and concepts. Thus, it will
be helpful to have had experience (for instance) with at least some of
the terms "hertz" (not the car company), "fricative," "copula," "lambda
abstraction," "gradient vector." Several kinds of people, from engineers
through linguists and psychologists to philosophers, should find the course's
contents interesting. However, we will angle the course towards participants
who want to study the meaning and use of intonation patterns, and we hope
that a significant fraction of the course will turn into a workshop on this
topic.

(3) Registration

Pre-registration is not mandatory, but if you expect to attend
it would be helpful if you would let Bill Poser (poser@su-csli) know.

Stanford students wishing to take the course for credit may enroll
for a directed reading with Paul Kiparsky or Bill Poser.

------------------------------

End of AIList Digest
********************

∂18-Oct-84  0000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #140    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Oct 84  23:59:31 PDT
Date: Wed 17 Oct 1984 22:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #140
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Oct 1984     Volume 2 : Issue 140

Today's Topics:
  Applications - Agriculture & Biofeedback,
  AI Tools - InterLisp-D DBMS & OPS5 & OPS5E & Verac & Benchmarks,
  Law - Liability of Software Vendors,
  Metadiscussion - List Citations
----------------------------------------------------------------------

Date: Tue, 16 Oct 84 11:45:49 cdt
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: AI applications in agriculture

I would like to know of any work in applying AI techniques to improve
agricultural production.  Tou at Florida and Michalski at Illinois had
some things going; what is the status of these projects?  Is there
anything else going on?

Thanks in advance for any help you can give me.

Walt Rudd
Department of Computer Science
298 Coates Hall
Louisiana State University
Baton Rouge, Louisiana 70803
rudd@lsu

------------------------------

Date: 3-Oct-84 23:53 PDT
From: William Daul / Augmentation Systems Div. / McDnD <WBD.TYM@OFFICE-1.ARPA>
Subject: PC <--> Biofeedback Instrument Link (info wanted)

A friend has asked me to see if I can uncover some information for him.
So...here goes...

   He wants to connect an EEG biofeedback instrument to a personal computer
   (IBM or APPLE).  He hasn't decided on which.

   1.  What are the necessary components of such a system (hard disk, disk
   controller, etc)?

   2.  He wants to get a spectrum analysis (FFT) of the recordings, both real
   time and compressed.  Does anyone know of existing software he could use?

   Emre Konuk
   MRI
   555 Middlefield Rd.
   Palo Alto, CA.  94301
   Tel: 415-321 3055 -- wk
        415-856 0872 -- hm

I suspect he would like to know if anyone knows of existing groups doing similar
work.  If you have information, you can send it to me "electronically" and I
will pass it on to him.  Thanks,  --Bi//  (WBD.TYM@OFFICE-2.ARPA)
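[As a hedged illustration of the spectrum-analysis step being asked about (my sketch, not any particular package's code): a discrete Fourier transform of a sampled signal yields a power spectrum whose peak bin identifies the dominant frequency band. A plain O(n^2) DFT is used for clarity; real software would use an FFT.

```python
import cmath
import math

def power_spectrum(samples):
    # Power at each frequency bin k, up to the Nyquist bin n/2.
    n = len(samples)
    return [abs(sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

# A 10 Hz "alpha band" sine sampled at 64 Hz for one second,
# so spectrum bin k corresponds to k Hz.
signal = [math.sin(2 * math.pi * 10 * t / 64) for t in range(64)]
spec = power_spectrum(signal)
peak_bin = max(range(len(spec)), key=spec.__getitem__)   # -> bin 10
```

-- ed.]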

------------------------------

Date: 15 Oct 84 16:55:43 PDT (Monday)
From: Cornish.PA@XEROX.ARPA
Subject: InterLisp-D based Database Management Systems

I would like information on any Database Management Systems that are
implemented in InterLisp-D.  More generally, I'd like literature
pointers to the issues of Database Management in AI.

Thank you,

Jan Cornish

------------------------------

Date: 14 Oct 1984 21:00-EST
From: George.Wood@CMU-CS-G.ARPA
Subject: Another OPS5 Version

There is also a Common Lisp version of OPS5, running on VAX/VMS Common Lisp,
PERQ (Spice) Lisp, Data General's Common Lisp for the MV 4000/8000/10000
series, and Symbolics 3600 in Common Lisp mode. This version was derived
from Forgy's Franz Lisp Implementation by George Wood (GDW@CMU-CS-PS1)
with help from Dario Giuse (Dario.Giuse@CMU-CS-SPICE) on the PERQ
version and standardization.

Sorry this missed the original call for information.

------------------------------

Date: 16 Oct 84 14:35 PDT
From: Tom Perrine <tom@LOGICON.ARPA>
Subject: OPS5E and Verac

Verac has moved. The new address is:
        Verac
        9605 Scranton Rd. Suite 500
        San Diego, CA 92121
        Attn: Pete Paine
        (619)457-5550

I believe that you must already have OPS5 before you can get OPS5E,
which is OPS5E(xtended).  It runs on all of the Symbolics machines, and
(now) also the TI Explorer.

------------------------------

Date: 15 October 1984 18:32-EDT
From: George J. Carrette <GJC @ MIT-MC>
Subject: LMI, TI, and Lisp Benchmarks.

Note: Comments following are due to George Carrette and Ken Sinclair,
      hackers at LMI, mostly covering specific facts which have been
      disclosed in previous announcements in "the trades."

* As far as benchmarks are concerned we would suggest that people
  at least wait until RPG publishes his results, which we consider to
  be the most serious effort to honestly represent the speed capabilities
  of the various machines.

* TI and LMI OEM arrangements.
  (1) LMI buys NuMachines on an OEM basis from TI. To these LMI adds
      the LAMBDA processor, software to support multiple LAMBDA and
      68000 Unix processors to run together on the NuBus, sharing
      disks, ethernet, and other devices.
  (2) LMI has a license to build NuMachines.
  (3) It was a technology transfer agreement (license) between LMI and TI that
      led to the transfer of technology to TI which was the basis of
      the Explorer.
  (4) LMI has an OEM agreement to purchase Explorers from TI.
      To these we will add our own microcode, optimizing compiler,
      and other products to be announced.


[Thank you very much for the reliable information.  I'm afraid most of
us don't keep up with the trade press, and messages like yours are a
great help.

A reader providing benchmarks a year ago (some of RPG's old benchmarks,
in fact) was chastised for not waiting for RPG's report.  At the time,
I had never heard of RPG; I assume many other people still have not.
If he hurries he may be able to benchmark the machines before the
good citizens of Palo Alto start using them for doorstops.  Meanwhile,
I see no harm in someone publishing timing statistics as long as he
offers to provide the code involved.

One further note: the benchmarks recently published in AIList were
originally circulated privately.  It was at my request that they
were made available to the list.  I thank Dr. Pentland for letting
me pass them along, and I regret any inconvenience he may have had
as a result.  -- KIL]

------------------------------

Date: Fri, 12 Oct 84 13:26:18 EDT
From: Stephen Miklos <Miklos@YALE.ARPA>
Subject: Liability of software vendors


>     "Maybe I am being naive or something, but I don't see why
>     AI software should
>     be different from any other when it comes to the liability of the vendor.
>     My attorney has written me a boilerplate contract that contains a clause
>     something to the effect that "vendor is not liable for third-party or
>     consequential damages that result from the use of the product."
>     Doesn't that take care of the problem?  If not, maybe I had better find
>     an expert attorney system."

Afraid not. Product liability can jump over the middleman (here the
doctor) and is not a contractually-based liability, thus contract terms
between the software vendor and the doctor or hospital cannot prevent
the liability from attaching. If the aggrieved party sued the doctor,
the doctor could not turn around and sue the software vendor (due to
the limitation of liability clause given above) but the aggrieved party
could sue the software vendor directly and avoid the contract
limitation (since he never signed any contract with the vendor).

  So much for standing to sue. As far as actual liability is concerned,
it becomes dicey. Products liability requires that a product, used in
the normal way it is intended to be used, cause some kind of injury.
It seems to me that the cause of the injury is the doctor's
reliance on the software, and therefore the doctor is the "proximate
cause." If, however, the particular software product becomes widely
used by doctors, the causation seems to shift. A reason for this might
be that a single doctor trying out a new piece of technology is
responsible for taking greater care to make sure it works than is a
doctor who is doing what is accepted in the medical community. For
instance, a medical malpractice charge can be avoided by proving that
all the doctor's actions were such as would be recommended by the
medical community in touch with the state of the art.

So, an experimental medical program ought to be safe--the doctor is
the guilty party for fooling around with experimental stuff while
treating a patient (at least without getting a waiver). But an
established program that has a deeply hidden bug in it is the stuff
plaintiffs' fortunes are made on.

By the way, you are not naive in assuming that an AI program will not
be treated differently by the courts than a regular program. But what
the AI program is trying to do--make judgments, diagnose illnesses, god
knows what all else--will introduce the risk of injury. No one is
going to be killed by a defective copy of Visi-calc.

****Disclaimer****--> I got my law degree back in '79, but I am not
now, and never have been, a practising attorney in any jurisdiction.
(I did pass the Connecticut Bar Exam.) These remarks are not to be
construed as legal advice, and should not be relied on as such by
anyone. These remarks are also not necessarily the opinions of my
employer, or of Mario Cuomo, whom I have never met.

                                  Stephen J. Miklos
                                  Cognitive Systems
                                  New Haven, CT

------------------------------

Date: Mon 15 Oct 84 08:48:26-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: AI List--Crediting Ideas From AI List

My first reaction to the question about how to cite something from AI List
is that it is an organized form of communication.  That is, there are dates,
volumes, numbers, an electronic place etc.  To me, this is what distinguishes
it from "just a communication channel" like the telephone or the xerox
copier.  I view AI List much closer to the journals but in electronic format.
Therefore if I were to cite something from AI List, I would use the format
for journal articles: author, possibly topic for title of comment, AI List
for title; the number, volume, and date of the list; and one additional
item, the electronic address.  If these lists are going to be kept and
can be looked up and referred to, I would recommend as complete a citation
as possible.

If AI List is viewed as more closely related to informal communications between
researchers, then the format would be that which one uses when referring
to a conversation or personal letter.  However, to me that would indicate
that another person would not have access to the primary discussion.

Harry Llull, Mathematical and Computer Sciences Library, Stanford University.

------------------------------

Date: 15-Oct-84 14:10 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2.ARPA>
Subject: Re: AILIST as a source of info....

From: Allen <Lutins@RU-BLUE.ARPA>

  Many recent AILIST discussions have fascinated me, and I'm sure that at some
  point in the near future I'll be using information presented here for a paper
  or two.  Just exactly how do I credit an electronic bboard in a research
  paper?  And who (i.e. moderator, author of info, etc.) do I give credit to?

This reminds me of Ithiel de Sola Pool's lament in note 8 to a paragraph in his
chapter on electronic publishing in Technologies of Freedom (Belknap Harvard
1983):

   "... The character of electronic publishing is illustrated by the problem of
   citing the information in this paragraph, which came from these interest
   group exchanges themselves.  Shall I cite the Arpanet list as from Zellich
   at Office-3?"

I am NOT an expert on obscure citations, so I can freely throw out the
following suggestion using Allen Lutins' original query for an
example.  "12345" would be the message ID if any had been provided:

   Lutins, Allen, "AILIST as a source of info...." message 12345 of 14 Oct
   1984 19:56 EDT, Lutins@RU-BLUE.ARPA or AIList Digest, V2 #138, 15 Oct 1984,
   AIList@SRI-AI.ARPA.

 -- kirk

[Alas, the title of a message is not a good identifier.  Many of the
messages in the AIList mailbox have meaningless titles (e.g., Re:
AIList Vol. 2, No. 136) or titles appropriate to some other bboard.
Some even have no titles.  I commonly supply another title as a service
to readers and as aid to my own sorting of the messages.  The title
sent out to Arpanet readers may thus differ from the title Usenet
readers see before I get the messages.  -- KIL]

------------------------------

Date: 15 October 1984 2252-PDT (Monday)
From: bannon@nprdc (Liam Bannon (UCSD Institute for Cognitive Science))
Reply-to: bannon <sdcsla!bannon@nprdc>
Subject: citing information on electronic newsboards

Allen Lutins' query about how to cite information obtained from AIList
interests me, as I have confronted this issue recently. I sent out a
query on netnews on "computer-mediated social interaction" (it even
got on this List) and received a number of interesting replies. I just
sent out a note on the "results" to net.followup, including quotations
from several msgs sent to me. I don't identify authors explicitly,
partly because of requests for anonymity. (I have however privately
acknowledged the contributions, and certainly do not try to pass them
off as being my own work.) I think this is ok for a net reply, but as
I am writing a large paper on the topic, I have decided to explicitly
ask all the people that I quote a) for permission to quote them, and
b) for permission to include their names with the quotes.

As to citing AIList, or net.general, or whatever, some of the msgs
sent to me were also broadcast to a newsgroup, others
were sent privately over the net to me, so I am unsure how to
cite them.  It is an interesting issue, though: if credit is
not given properly for ideas that first appeared on the net, then
there is a danger that people will be reluctant to share ideas on
the net until after "official" publication, thus destroying the
vitality of the net. I'll go ask some librarians to see if they
have any thoughts. I would be interested in other people's opinions
on the issue.
-liam bannon (bannon@nprdc)

------------------------------

Date: Tue, 16 Oct 1984  14:08 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: AILIST as a source of info....


    [Certainly the author must be credited.  ... ]

I'm not a librarian but have had some experience in citing obscure
references.  I think it can be cited just like a newsletter is cited;
after all, it is a newsletter: cite author, title, newsletter name,
Vol., and No.; maybe method of publication (ARPANET).  It is a form of
publication, though informal, just like a newsletter.  As for
copyright, I don't see that there is any problem since none of the
authors I've seen have ever copyrighted their material.  I'm assuming
it's fair game for copying, but that scientific (or literary) protocol
would oblige us to credit authors.

Fanya


[The welcome message I send out to each new subscriber states:

  List items should be considered unrefereed working papers, and
  opinions to be those of the author and not of any organization.
  Copies of list items should credit the original author, not
  necessarily the AIList.  The list does not assume copyright, nor does
  it accept any liability arising from remailing of submitted material.

The phrase "working papers" (which is also used by the SIGART newsletter)
is intended to mean that the author is not ready to officially publish
the material and thus is not surrendering copyright.  This might not
hold up in court, but it does establish the context in which people have
been submitting their material.

I have not been as strict as some list moderators in protecting authors
against unauthorized copying.  (The Phil-Sci list is/was particularly
strict about this.)  I have treated AIList as just another bboard that
happens to have a distributed readership.  I have forwarded items to
AIList from university bboards (as well as physical bboards), and I have
no objection to similar copying in return.  I would draw the line at
some major journal or copyrighted book quoting directly from the list
without at least asking the readership whether anyone objected to the
copying.  As I do not hold copyright, however, it really makes no
difference where I draw the line.  If someone copies material and the
author sues, the resolution will be up to a judge.  All that I can do
is to clarify the intention that should be ascribed to submitters in
the absence of other declarations.  -- KIL]

------------------------------

End of AIList Digest
********************

∂18-Oct-84  1240	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #141    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Oct 84  12:39:49 PDT
Date: Thu 18 Oct 1984 10:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #141
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Oct 1984     Volume 2 : Issue 141

Today's Topics:
  LISP - Common Lisp Flavors,
  AI Tools - OPS5 & Benchmarks,
  Linguistics - Language Evolution & Sastric Sanskrit & Man-Machine Language,
  AI - The Two Cultures
----------------------------------------------------------------------

Date: Thursday, 18 Oct 1984 06:11:24-PDT
From: michon%closus.DEC@decwrl.ARPA  (Brian Michon  DTN: 283-7695 FPO/A-3)
Subject: Common Lisp Flavors

Is there a flavor package for Common Lisp yet?

------------------------------

Date: 18 Oct 84 02:26 PDT
From: JonL.pa@XEROX.ARPA
Subject: OPS5 & Benchmarks

Two points, inspired by issue #140:

1) Xerox has a "LispUsers" version of OPS5, which is an unsupported
transliteration from the public Franz version, of a year or so ago, into
Interlisp-D.  As far as I know, this version is also in the public
domain.  [Amos Barzilay and myself did the translation "in a day or so",
but have no interest in further debugging/supporting it]


2)   Richard Gabriel is out of the country at the moment; but I'd like
to take a paragraph or two to defend his benchmarking project, and
report on what I witnessed at the two panel discussion sessions it
sponsored -- one at AAAI 83 and the other at AAAI 84.  The latter was
attended by some 750 persons (dwindling to about 300 in the
closing hours!).  In 1983, no specific timing results were released,
partly because many of the machines under consideration were undergoing
a very rapid rate of development; in 1984, the audience got numbers
galore, more perhaps than they ever wanted to hear.  I suspect that the
TI Explorer is also currently undergoing rapid development, and numbers
taken today may well be invalid tomorrow (Pentland mentioned that).
     The point stressed over and over at the two panel sessions is that
most of these benchmarks were picked to monitor some very specific facet
of Lisp performance, and thus no single number could adequately compare
two machines.  In the question/answer session of 1983, someone tried to
cajole some such simplistic ratio out of Dr Gabriel, and his reply is
worth reiterating: "Well, I'll tell you -- I have two machines here, and
on one of the benchmarks, they ran at the same speed; but on another
one, there was a factor of 13 difference between them.  So, now, which
number do you want?  One, or Thirteen?"
     One must also note that many of the more important facets for
personal workstations were ignored -- primarily, I think because it's so
hard to figure out a meaningful statistic to monitor for them, and
partly because I'm sure Dick wanted to limit somewhat the scope of his
project.  How does paging figure into the numbers?  If paging is
factored out, then what do the numbers mean for a user who is frequently
swapping?  What about local area network access to shared facilities?
What about the effects of GC?  I don't know anyone who would feel
comfortable with someone else's proposed mixture of "facets" into a
Whetstone-style benchmark; it's entirely possible that the
variety of facet mixtures found in Lisp usage is much greater than that
found in Fortran usage.  [Nevertheless, I seem to remember that the
several facets reported upon by Pentland are at the core of almost any
Lisp (or, rather, ZetaLisp-like Lisp) -- function call, message passing,
and Flavor creation -- so he's not entirely off the wall.]
     In summary, I'd say that both manufacturers and discerning buyers
have benefited from the discussions brought about by the Lisp timings
project; the delay on publication of the (voluminous!) numbers has had
the good effect of reminding even those who don't want to be reminded
that *** a single number simply will not do ***, and that "the numbers",
without an understanding analysis, are meaningless.  Several of the
manufacturers' representatives even admitted during the 1984 panel
sessions that their own priorities had been skewed by monitoring facets
involved in the Lisp system itself, and that seeing the RPG benchmarks
as "user" rather than "system" programs gave them a fresh look at the
areas that needed performance enhancements.
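[The "One, or Thirteen?" anecdote above is easy to restate in miniature.  A
sketch in modern Python (the machine names and timings below are invented
for illustration and do not come from the RPG benchmark suite) of why
per-facet results cannot be collapsed into one honest number:

```python
# Hypothetical per-facet timings (seconds); values invented for illustration.
timings = {
    "machine-A": {"function-call": 1.0, "message-passing": 1.0},
    "machine-B": {"function-call": 1.0, "message-passing": 13.0},
}

def facet_ratios(base, other, timings):
    """Per-facet speed ratio of `other` relative to `base`."""
    return {facet: timings[other][facet] / timings[base][facet]
            for facet in timings[base]}

ratios = facet_ratios("machine-A", "machine-B", timings)
# One facet says the machines are equal; the other says 13x apart.
# No single scalar summarizes both.
```

Which ratio you quote depends entirely on which facet your workload
exercises -- Gabriel's point exactly.]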


-- Jon L White --

------------------------------

Date: 18 October 1984 0646-PDT (Thursday)
From: mbr@nprdc
Reply-to: mbr@NPRDC
Subject: Re: Timings

I along with about 8 million others heard RPG (Richard Gabriel) talk
at AAAI this year and at the Lisp Conference both this year and 2
years ago, so the benchmarks are around. I dunno if he has the
results on line (or for that matter what his net address is--
he was at LLL doing common lisp for the S1 last I heard), but
someone in net land might know, and a summary could be posted to
AIList mayhaps?

Mark Rosenstein


[Dr. Gabriel is on the net, but I will let him announce his own
net address if he wishes to receive mail on this subject.  -- KIL]

------------------------------

Date: 15 Oct 1984 09:40-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Language Evolution - Comments

For what it's worth:

Any language in use by a significant number of speakers is under
constant evolution.  When I studied ancient Greek only singular and
plural were taught; dual was considered useful only for very old texts,
e.g. Homer or before.  The explanation for this was twofold:

        1) as the language was used, it became cumbersome to worry about
           dual when plural would suffice.  The number of endings for
           case, sex and so on is very large in ancient Greek; having
           dual just made things more cumbersome.

        2) similarly, as ancient Greek became modern Greek, case to a
           large extent vanished.  Why? Throughout its use, Greek
           evolved many special forms for words which were heavily used,
           e.g. to be. Presumably because no one took the time to speak
           the complete original form and so its written form changed.

I pose two further questions:

        1) Why would singular, dual, and plural evolve in the first
           place?  Why not a tri and quad as well?  Dual seems to be
           (at least to me) very unnatural.

        2) I would prefer English to ancient Greek principally because
           of the lack of case endings and conjugations.  It is very
           difficult to express certain new ideas, e.g. the concept of a word
           on its own with no sex or case, in such a language.  Why
           would anyone consider case useful?

                                                        -Todd K.

------------------------------

Date: 15 Oct 1984 09:52-PDT (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Re: Language Evolution


        Why do languages move away from case?  Why did Sastric Sanskrit
die?  I think the answer is basically entropy.  The history of
language development points to a pattern in which linguists write
grammars and try to enforce the rules (organization), and the tendency
of the masses is to sacrifice elaborate case structures etc. for ease
of communication.
        One of the reasons Panini codified the grammar of Sanskrit so
carefully is that he feared a degeneration of the language, as was
already evidenced by various "Prakrits" or inferior versions of
Sanskrit spoken by servants etc.  The Sanskrit word for barbarian
was "mleccha" which means "one who doesn't speak Sanskrit"; culture
and high civilization were equated with language.  Similarly English
"barbarian" is derived from the greek "one who makes noises like
baa baa" i.e. who doesn't speak Greek.
        Current Linguistics has begun to actually aid this entropy by
paying special attention to slang and casual usage (descriptive vs.
prescriptive).  Without some negentropy from the linguists, I fear
that English will degenerate further.

Rick Briggs

------------------------------

Date: Monday, 15-Oct-84 19:32:13-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: Sastric Sanskrit again

     Briggs' message of 9 Oct 84 makes things a lot clearer.
The first thing is that Sastric Sanskrit is an artificial language,
very like Fitch's "unambiguous English" subset (he is a philosopher
who has a paper showing how this rationalised dialect is clear
enough so you can do Natural Deduction proofs on it directly).

     One thing he confuses me about is case.  How is having case
a contribution to unambiguity?  What is the logical difference
between having a set of prepositions and having a set of cases?
Indeed, most languages that have cases have to augment them with
prepositions because the cases are just too vague.  E.g. English
has a sort of possessive case "John's", but when we want to be
clear we have to say "of John" or "for John" or "from John" as
the case may be.  Praise of Latin is especially confusing, when
you recall that (a) that language hasn't got a definite article
(it has got demonstratives) and (b) the results of a certain
church Council had to be stated in Greek because of that ambiguity.
If you can map surface case to semantic case, surely you can map
prepositions to semantic case?
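[The mapping O'Keefe asks for can be made concrete.  A toy sketch in modern
Python, built directly on his "of John" / "for John" / "from John" examples;
the semantic-case labels are conventional linguistic terms, chosen here for
illustration, not taken from the digest:

```python
# Prepositions map to semantic cases at least as directly as surface
# case endings do.  Case labels are illustrative assumptions.
PREP_TO_CASE = {
    "of":   "genitive",     # "of John"   -> possession
    "for":  "benefactive",  # "for John"  -> beneficiary
    "from": "ablative",     # "from John" -> source
}

def semantic_case(prep_phrase):
    """Map a 'preposition noun' phrase to a (semantic-case, noun) pair."""
    prep, noun = prep_phrase.split(maxsplit=1)
    return PREP_TO_CASE[prep], noun

# semantic_case("of John") -> ("genitive", "John")
```

If such a table works for prepositions, nothing special is gained by
encoding the same distinctions as case endings instead.]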

     The second thing which Briggs makes clear is that Sastric
Sanskrit is unbelievably long-winded.  I do not believe that it can
ever have been spontaneously spoken.

     The third thing is that despite this it STILL isn't unambiguous,
and I can use his own example to prove it.

     He gives the coding of "Caitra cooks rice in a pot", and
translates it back into English as "There is an activity(vyaapaara:),
subsisting in the pot, with agency residing in one substratum not
different from Caitra, which produces the softening which subsists in
rice."  Is Caitra BOILING the rice or STEAMING it?  It makes a
difference!  Note that this doesn't prove that Sastric Sanskrit
can't describe the situation unambiguously, only that it contains at
least one ambiguous sentence.  Then too, suppose I wanted to
translate this into Greek.  I need to know whether or not to use
the middle voice.  That is, is Caitra cooking the rice for HIMSELF,
or for someone ELSE?  Whichever choice I make in my translation, I
run the risk of saying something which Briggs, writing Sastric
Sanskrit, did not intend.  So it's ambiguous.

     Now that Briggs has made things so much clearer, I would be
surprised indeed if AI couldn't learn a lot from the work that
went into the design of Sastric Sanskrit.  Actually using their
formalism for large chunks of text must have taught its designers
a lot.  Though if "blackbird" really is specified as "a colour-
event residing in a bird" the metaphysical assumptions underlying
it might not be immune to criticism.

     A final point is that we NEED languages which are capable
of coding ambiguous propositions, as that may be what we want to
say.  If Briggs see Caitra cooking some rice in a pot, he may
not KNOW whether it is for Caitra or for another, so if Briggs
is going to tell me what he sees, he has to say something I may
regard as ambiguous.  Similarly, when a child says "Daddy ball",
that ambiguity (give me the ball?  bounce the ball? do something
surprising with the ball?) may be exactly what it means to say;
it may have no clearer idea than that it would like some activity
to take place involving Daddy and the ball.  A language which is
incapable of ambiguous expression is suited only to describing
mathematics and other games.

------------------------------

Date: 16 Oct 84 11:01:49-CDT (Tue)
From: "Roland J. Stalfonovich" <rjs%okstate.csnet@csnet-relay.arpa>
Subject: AI Natural Language

Much has been said in the last few notes about old or forgotten human
languages.  This brings up an interesting point.
Has anyone thought of making (or is there currently) a 'standard' language
for AI projects?  Not a programming language, but rather a communication
language for "interspecies" communication, man to machine (that is the
whole hope of AI, after all).

Several good choices exist and have existed for several generations.  The
languages of Esperanto and Unifon are two good choices for study.
Esperanto was devised around the turn of the century for the purpose of
becoming the international language of the world.  To these ends it has
obviously failed.  This does not, however, mean that it is without merit.
Its advantages of an organized verb conjugation and easy noun and pronoun
definition make it a good choice for an 'easily implemented' language.
Unifon is a simplification of English.  It involves the replacement of the
26 characters of the English alphabet by a set of 40 characters representing
the 40 phonics (thus the name) of the English language.  This would allow
the implementation of the language for speech synthesis (a pet project of many
research groups).

There are many more languages, and I am sure that everyone has his or her
own favorite.  But for the criteria of being easily implemented on a computer
in both the printed and spoken form, Esperanto and/or Unifon should be
seriously considered.

------------------------------

Date: Mon 15 Oct 84 11:22:35-PDT
From: BARNARD@SRI-AI.ARPA
Subject: The Two Cultures of AI

It seems to me that there are two quite separate traditions in AI.
One of them, which I suppose includes the large majority of AI
practitioners, is devoted to rule-based deductive methods for problem
solving and planning. (I would include most natural language
understanding work in this category, as well.)  The other, which
occupies a distinctly minority position, is concerned with models of
perception --- especially visual perception.  It is my experience that
the followers of these two traditions often have trouble
communicating.

I want to suggest that this communication problem is due to the
fundamental difference in the kinds of problems with which these two
groups of people are dealing.  The difference, put simply, is that
"problem solving" is concerned with how to find solutions to
well-posed problems effectively given a sufficient body of knowledge,
while "perception" is concerned with how to go beyond the information
given.  The solution of a well-defined problem, once it is known, is
known for certain, assuming that the knowledge one begins with is
valid.  Perception, on the other hand, is always equivocal.  Our
visual ability to construct interpretations in terms of invariant
properties of physical objects (shapes, sizes, colors, etc.) is not
dependent on sufficient information, in the formal logical sense.

As a researcher in perception, I have to admit that I am often annoyed
when problem-solving types insist that their formal axiomatic methods
are universal in some sense, and that they essentially "define" what
AI is all about.  No doubt they are equally annoyed when I complain
about the severe limitations of the deductive method as a model of
intelligence, and relentlessly promote the inductive method.  I'll
end, therefore, with a plea for tolerance, and for a recognition that
intelligence may, and in fact must, incorporate both "ways of
knowing."

------------------------------

End of AIList Digest
********************

∂19-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #142    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Oct 84  11:46:32 PDT
Date: Fri 19 Oct 1984 09:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #142
To: AIList@SRI-AI


AIList Digest            Friday, 19 Oct 1984      Volume 2 : Issue 142

Today's Topics:
  Applications - Biofeedback Instrument Link,
  LISP - Common Lisp Flavors,
  AI Tools - TI Expert System Development Tool & Benchmarks,
  Linguistics - Languages and Cases,
  Knowledge Representation - Universal Languages,
  Administrivia - Sites Receiving AIList & Net Readership
----------------------------------------------------------------------

Date: 18 Oct 84 14:28:34 EDT
From: kyle.wbst@XEROX.ARPA
Subject: Biofeedback Instrument Link

The John F. Kennedy Institute For Handicapped Children (707 North
Broadway, Baltimore, Maryland 21205 phone 955-5000) has done work in
this area. Contact Lynn H. Parker, or Dr. Michael F. Cataldo. They have
also published in things like  Journal of Behavioral Medicine.

Dr. D. Regan at Dalhousie University Department of Psychology Halifax,
N.S. B3H 4J1 has also done a lot in this area including the real time
Fourier analysis in a feedback loop. You can read about his work in the
Dec. 1979 issue of Scientific American (Vol. 241, No. 6 around p. 144 as
I recall).

At Carnegie-Mellon University were some people with experience in this
area. You may try contacting: A. Terry Bahill (Bioengineering ); Mark B.
Friedman (Psychology and EE). They may also be able to put you in touch
with a person they worked with about 4 years ago at the Pittsburgh Home
for Crippled Children called Mata Loevner Jaffe. I think she left full
time status at HCC and is now a professor at the University of
Pittsburgh.

If you want historical info, look in the literature for a system called
PIAPACS (I forgot what the acronym stands for now) that was developed by
Lear Siegler Co. in Michigan for test pilots at Edwards Air Force Base
in California in the mid 1960's.

And finally there is the historical work at Cambridge Air Force Research
Labs in the early 1960's to put a man in a feedback loop to use
amplitude modulation of the brain waves (alphas) to send morse code via
a PDP-8 (to clean up the signals and do some limited pattern
recognition) to a teletypewriter to transmit the first message
"CYBERNETICS". Shortly thereafter, Barbara Brown (I'm not sure of the
first name here) at the VA Hospital in Los Angeles used BFT techniques
to have subjects control lights and small model railroad trains.

Earle.

P.S. The ultimate source of commercially available hardware and software
in this area would be the TRACE Center at the University of Wisconsin at
Madison.

------------------------------

Date: Thu, 18 Oct 1984  17:53 EDT
From: Steven <Handerson@CMU-CS-C.ARPA>
Subject: Common Lisp Flavors


I am working on Flavors as part of the Spice Lisp project at CMU.  Although a
prototype system has been finished, we are currently in the process of
redesigning the thing from the ground up in an attempt to make it more modular
and portable [we've pretty much trashed the idea of a "white-pages"
(manual-level) object-oriented interface for now].  Could be another month.

-- Steve <Handerson at CMU-CS-C>

------------------------------

Date: Thu, 18 Oct 84 11:43:53 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: Expert System Development Tool from TI

[From the October 1984 issue of Systems & Software magazine, page 50]

  TI AI tool prompts users to develop application


With many companies now entering the artificial-intelligence business, the
question, "Are there enough AI experts to write the programs?" has been
raised.  The answer is that Ph.D.s in AI are no longer needed to write expert
systems because several expert-system-development tools are available,
including one just introduced by Texas Instruments.

To ensure that AI tools can be used by nonexperts, Texas Instruments has
introduced a first-of-a-kind tool that prompts users for all information
needed to develop an expert system.  The Personal Consultant is a menu- and
window-oriented system that develops rule-based, backward-chaining expert
systems on the TI Professional Computer under MS-DOS operating systems...

------------------------------

Date: 19 October 1984 12:07-EDT
From: George J. Carrette <GJC @ MIT-MC>
Subject: LMI, TI, and Lisp Benchmarks.

Glad to be of some help. The main problem I had with Pentland's note
was the explanatory comments which were technically not as informative
as they could have been. Let me take a moment to review them:

(1) BITBLT. This result has more to do with the different amounts
    of microcode dedicated to such things and the micro instruction
    execution speed. Both the TI and 3600 have a simple and fast
    memory bus talking to similar dynamic ram technology. (On the
    other hand the LAMBDA has a cache and block/read capability)
(2) FLOATING POINT. Unless TI has extensively reworked the seldom-used
    small-floating-point-number code from what LMI sent them, it is the
    case that small floats are converted into longs inside the microcode
    and then converted back.
(3) CONS & PAGING. ??? Would be more interesting to know how long
    a full-gc of a given number of mega-conses takes. That bears more on the
    real overall cost of consing and paging.
(4) MAKE-INSTANCE. Could indeed be improved on both the TI and the 3600.
    People who need to make instances fast and know how usually resort
    to writing their own %COPY-INSTANCE, since overhead of system default
    MAKE-INSTANCE depends a lot on sending :INIT methods and other parsing
    and book-keeping duties.
(5)(6) SEND/FUNCALL. These are fully microcoded, although improvements are
    possible. There are some fundamental differences between the
    LMI/TI micro architecture and the 3600 when it comes to function
    calling though. In a "only-doing-function-calls-but-no-work"
    kind of trivial benchmark there are good reasons why a
    LMI/TI architecture will never equal a 3600 architecture.
(7) 32bit floating. Similar comment as applied to small floats,
    there wasn't any 32-bit floating point number representation in the
    code before, the floating point numbers were longer than 32 bits total.

Then there was a reference to "it is already much more than an LMI,
CADR or LM2." First of all, the Explorer *is* an LMI product, and
secondly the main product line based on the LMI-LAMBDA has some
fundamentally different features including pageable microstore,
lisp->microcode compiler, plenty of room for user loadable microstore,
SMD disk interface, multiple-processor software support, physical memory
cache, which can very strongly and materially change the performance of many
applications interesting in both AI research and practice.
If you need raw performance in simulation, vision research, array processing,
the classic way to go is special microcode
or special purpose hardware. The rule may be that simple operations
(such as what one may find in trivial benchmarks) done many times call
for specialization. The LAMBDA has better support
for microcode development, (more statistics counters, micro history,
micro stack, micro store, the possibility of
doing lambda->lambda debug using multiple-processor lambda configuration,
paging microcode good for patching during development) than any other
lispmachine. Of course, it does have a high degree of microcode
compatibility with the Explorer, which does suggest some possible ways
to do things probably of interest more to applying technology than to pure
get-it-up-the-first-time research.

-gjc

------------------------------

Date: Thu 18 Oct 84 15:44:36-MDT
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Languages and cases

When we discuss why cases have disappeared, we should also consider why
they have appeared.  It is clear that they have appeared as "naturally" as
they have disappeared.  Which of these represents a rise in "entropy"?

A reasonable explanation seems to be that cases have appeared for the sake
of convenience and brevity.  Before their proliferation, probably
prepositions and postpositions were used.  Eventually, the cases became such
a burden that people moved away from their complexity.  Don't we see the
same trend in programming languages?

Uday Reddy

------------------------------

Date: Fri 19 Oct 84 09:22:50-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Universal Languages (again!)

Before worrying about a universal language for man-machine communication,
we need a universal mechanism for knowledge representation!  After all,
the external language cannot include concepts (words) for things that
are not internally expressible.  And while there have been numerous
claimants to the status of a UKRL (Universal Knowledge Representation
Language) (one of my own projects included), there are none that can
really qualify, except perhaps on the basis of Turing-equivalence.
Perhaps the best overall candidate is some kind of logical formalism,
but as one makes the formalism more general, it seems to become
more content-free.  Seems to me (from examination of the literature)
that the search for a UKRL was very active about 3-5 years ago, but
that now everybody has given it up as being the wrong thing to look
for (does anybody who was there disagree with this analysis?).

These days, I'm inclined to believe that one might establish conditions
for *sufficiency* in a KRL.  There's the obvious condition that the
KRL should be Turing-equivalent.  Less obviously perhaps, the KRL should
also have the means of automatically translating expressions written
using that KRL to ones in some other KRL.  Also, the KRL should have
complete knowledge of itself (the second condition probably implies
this).  There may be other reasonable conditions (such as some condition
stating that KRL expressions should have some explicit relation to things
in the "real world"), but I think the three above should be a minimum.
Notice that they also make the question of a *single* UKRL irrelevant.
Two sufficiently powerful KRLs can translate themselves back and forth
freely, so neither is more "universal" than the other.  Notice also
that any given KRL must have knowledge of at least one other KRL, in
order to facilitate the translation process.  When such KRLs are
available, then we can profitably think about standard ways of
communicating (to ease the poor humans' difficulty with handling
69 KRLs all at once!)

                                                        stan shebs

ps I haven't actually seen any research along these lines (although
Genesereth and Mackinlay made some suggestive remarks in their AAAI-84
paper).  Is anybody out there looking at KRL translation, or maybe
something more specific, like OPS5 <-> Prolog?
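[Shebs's mutual-translatability condition can be sketched at toy scale.  The
two "surface forms" below (relation triples and Prolog-style clauses), and
the translation between them, are invented purely for illustration -- a real
OPS5 <-> Prolog translator would of course be far more involved:

```python
# A lossless round-trip between two trivial "KRL" surface notations,
# in the spirit of the mutual-translation condition above.

def triple_to_prolog(triple):
    """('likes', 'john', 'mary') -> 'likes(john, mary).'"""
    rel, a, b = triple
    return f"{rel}({a}, {b})."

def prolog_to_triple(clause):
    """'likes(john, mary).' -> ('likes', 'john', 'mary')"""
    rel, rest = clause.rstrip(".").split("(", 1)
    a, b = (x.strip() for x in rest.rstrip(")").split(","))
    return (rel, a, b)

fact = ("likes", "john", "mary")
assert prolog_to_triple(triple_to_prolog(fact)) == fact  # round-trip holds
```

When each notation can recover the other's facts exactly, neither is more
"universal" than the other -- which is the point of the condition.]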

------------------------------

Date: Thu 11 Oct 84 10:44:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Sites Receiving AIList

Readers at the following sites have responded to my Sep. 26 list of
AIList recipients (Volume 2, No. 125), or have since signed up for
the digest.  (There are still many other sites, of course, particularly
on Usenet.  I have also had contact with individuals who receive
the digest but cannot respond via the net.)


Army Ballistic Research Laboratory
Army Missile Command
Defense Communications Agency
DoD Computer Security Center
Edwards Air Force Base

Arthur D. Little, Inc.
Battelle Northwest (Pacific Northwest Laboratory)
Bell Communications Research
Interactive Systems Corporation
Lockheed
Microelectronics and Computer Corporation
Varian Associates

Case Western Reserve University
Dundee College of Technology, Scotland
Indiana University
Northeastern University
Southern Methodist University
Stockton State College
University of California at San Diego
University of Illinois at Urbana
University of Waterloo
Washington University in St Louis


My apologies to any sites I previously misspelled, including

Naval Personnel Research and Development Center
Naval Research Laboratory
Naval Surface Weapons Center
University of Rochester

                                        -- Ken Laws

------------------------------

Date: Wed, 10 Oct 84 01:50:10 edt
From: bedford!bandy@mit-eddie
Subject: Net Readership

     [Forwarded from the Human-Nets digest by Laws@SRI-AI.]

        Date: Mon, 8 Oct 84 14:28 EDT
        From: TMPLee@MIT-MULTICS.ARPA

        Has anyone ever made an estimate (with error bounds) of how
        many people have electronic mailboxes reachable via the
        Internet?  (e.g., ARPANET, MILNET, CHAOSNET, DEC ENET, Xerox,
        USENET, CSNET, BITNET, and any others gatewayed that I've
        probably overlooked?)  (included in that of course group
        mailboxes, even though they are a poor way of doing business.)

Gee, my big chance to make a bunch of order of magnitude
calculations.... [...]

USENET/DEC ENET: 10k machines, probably on the order of 40 regular
users for the unix machines and 20 for the "other" machines so that's
100k users right there.

  [Rich Kulaweic (RSK@Purdue) notes 15k users on 40 Unix machines
  at Purdue, with turnover of several thousand per year.  -- KIL]

BITNET: something like 100 machines and they're university machines in
general, which implies that they're HEAVILY overloaded, 100-200
regular active users for each machine - 10k users.

  [A news item in the latest CACM mentions 200 hosts at 60 sites,
  soon to be expanded to 200 sites worldwide.  A BITNET information
  center is also being developed by a consortium of 500 U.S.
  universities, so I expect they'll all get nodes soon.  -- KIL]

Chaos: about 100-300 machines, 10 users per machine (yes, oz and ee
are heavily overloaded at times, but then there's all those unused
vaxen on the 9th floor of ne43). 1k users for chaosnet.

I think that we can ignore csnet here (they're all either on usenet or
directly on internet anyway...), so they count for zero.

ARPA/MILNET: Hmm... This one is a little tougher (I'm going to include
the 'real' internet as a whole here), but as I remember, there are
about 1k hosts. Now, some of the machines here are heavily used
(maryland is the first example that pops to mind) and some have
moderate loads (daytime - lots of free hardware at 5am!), let's say
about 40 regular users per machine -- another 10k users.

I dare not give a guesstimate for Xerox.

  [Murray.PA@Xerox estimates 4000 on their Grapevine system.  -- KIL]

So it's something on the order of 100k users for the community. [...]
Well, it could be 50k people, but these >are< order of magnitude
calculations...
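[Andy's arithmetic, redone in modern Python for the record -- the figures
are exactly the per-network guesses from his message above, nothing new:

```python
# Order-of-magnitude mailbox estimates from the message above.
estimates = {
    "USENET/DEC ENET": 100_000,
    "BITNET":           10_000,
    "CHAOSNET":          1_000,
    "CSNET":                 0,   # counted as zero: users overlap other nets
    "ARPA/MILNET":      10_000,
}
total = sum(estimates.values())   # 121,000 -- "on the order of 100k"
```

The 121k sum rounds to his stated 100k order of magnitude.  -- KIL]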

  [Mark Crispin (MRC@Score) notes that there are 10k addressable
  mailboxes at Stanford, but that the number of active users is
  perhaps only a tenth of this.  Andy's final estimate might be
  inflated or deflated by such a factor.  -- KIL]

Now that I've stuck my neck out giving these estimates, I'm awaiting
for it to be chopped off.

        andy beals
        bandy@{mit-mc,lll-crg}

------------------------------

End of AIList Digest
********************

∂20-Oct-84  2331	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #143    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Oct 84  23:29:52 PDT
Date: Sat 20 Oct 1984 22:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #143
To: AIList@SRI-AI


AIList Digest            Sunday, 21 Oct 1984      Volume 2 : Issue 143

Today's Topics:
  Programming Languages - Buzzwords,
  AI Tools - LISP Machine Benchmarks,
  Linguistics - Language Evolution & Sastric Sanskrit,
  Seminar - Transformational Grammar and AI,
  PhD Oral: Theory-Driven Data Interpretation
----------------------------------------------------------------------

Date: 19 October 1984 22:52-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: buzzwords for different language types

Could someone out there please tell me the usual catch phrases for
distinguishing between languages such as C, Pascal, Ada on one hand
and languages such as LISP on the other?

Is it "structured" vs "unstructured"?  List vs ??

Thanks.

------------------------------

Date: Fri 19 Oct 84 13:08:44-PDT
From: WYLAND@SRI-KL.ARPA
Subject: LISP machine benchmarks

A thought for the day on the AI computer benchmark controversy.

We need a single, simple measure for machine quality in order to
decide which machine to buy.  It must be simple and general
because these are typically intended to be used as general
purpose AI research machines where we cannot closely define and
confine the application.

We already have one single, simple measure called price.  If
there is no *simple* alternative number based on performance,
others (i.e. those funding the effort) will use price as the only
available measure, and we will have to continually struggle
against it using secondary arguments and personal opinion.

It should be possible to create a simple benchmark measure.  It
will, of necessity, be highly abstracted and crude.
This has been done for conventional computer systems: the acronym
MIPS is now fairly common, for good or ill.  Yes, there are
additional measures, but they are used in addition to simple ones
like MIPS.

We need good, extensive benchmarks for these machines: they will
point out the performance bugs that are unique to particular
designs.  After we do the benchmarks, however, we need to boil it
down to some simple number we can use for general purpose
comparison to place in opposition to price.
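[One conventional way to boil a suite down to the single figure this
message asks for is a geometric mean of per-benchmark speed ratios against
a reference machine.  A sketch in modern Python; the choice of geometric
mean, and the sample ratios, are illustrative assumptions, not something
proposed in the digest:

```python
from math import prod

def geometric_mean(ratios):
    """Single summary score from per-benchmark speedup ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

ratios = [1.0, 2.0, 4.0]          # hypothetical speedups vs. a reference
score = geometric_mean(ratios)    # 2.0 for this sample
```

The geometric mean at least keeps a 13x outlier from being averaged away
linearly, though it still hides exactly the per-facet detail the benchmark
discussion elsewhere in this issue warns about.]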

------------------------------

Date: 19 Oct 84 10:32 PDT
From: Schoppers.pa@XEROX.ARPA
Subject: The Future of the English Auxiliary

In response to Ken Kahn's question on language evolution, my own theory
is that the invasion of a language by foreign cultures, or vice versa,
has a lot to do with how simple a language becomes: cross-cultural
speakers tend to use only as much as absolutely necessary for them to
consider themselves understood. The English spoken in some communities,
eg "Where they goin'?" (missing an auxiliary), "Why he be leavin'?"
(levelling the auxiliary), "He ain't goin' nowhere" (ignoring double
negatives), etc may well be indicative of our future grammar. On the
other hand, "Hey yous" for plural "you" (in Australia), and "y'all"
(here), are pointing towards disambiguation. Well, there does have to be
a limit to the simplification, lest we "new-speak double-plus ungood".
Then again, "ain't" can mean any one of "am not", "aren't", "isn't",
"haven't", "hasn't" --- effectively replacing both the primary English
auxiliaries (to be, to have) in all their conjugations! United States
"English", being the lingo of the melting pot, will probably change
faster than most.

Marcel Schoppers
Schoppers@XEROX

------------------------------

Date: Fri 19 Oct 84 15:23:26-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Cases & Evolution of Natural Language

Has anybody at all researched the origins of language?  I'm not an expert
on the subject, but I do know that the languages of aboriginal tribes
are extraordinarily complicated, as languages go.  But they probably
don't give us much clue to what the earliest of languages were like.
If you believe that the earliest of languages arose along with human
intelligence, then you can suppose that the most primitive languages
had a separate "word" for each concept to be expressed.  Such concepts
might include what would correspond to entire sentences in a modern
language.  Thus the most primitive languages would be completely
non-orthogonal.  When intelligence developed to a point where the
necessary vocabulary was just too complex to handle the wide range
of expressible concepts, then perhaps some individuals would start
grouping primitive sounds together in different ways (the famous
chimpanzee and gorilla language experiments suggest that other primates
already have this ability), resulting in the birth of syntactic
rules.  Obvious question:  can all known languages be derived
as some combination of arbitrarily bizarre syntactic/semantic rules?
(I would guess so, based on results for mathematical languages)

Word cases can then be explained as one of the last concepts to
be factored out of words.  In the most ancient Indo-European languages,
for instance, prepositions are relatively infrequent, although
the notions of subject, object, verb, and so forth have already
been separated into separate words.  Perhaps in the future, singular
and plural numbers will be separated out also (anyone for "dog es"
instead of "dogs"?).

                                                        stan shebs

------------------------------

Date: 19 Oct 1984 15:17-PDT (Friday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit

        Firstly, the language is NOT artificial.  There is a LITERATURE
which is written in this language.  It is different from toy artificial
languages like Fitch's in that for three thousand years scientists
communicated and wrote texts in this language.  There are thus
two aspects which are interesting and relevant; one is that research
such as I have been describing was carried out in its peculiar context,
the other is that a natural language can function as an unambiguous,
inference-generating language without sacrificing simplicity or
stylistic beauty.
        The advantage of case is that (assuming it is a good case system)
you have a closed set with which a correspondence can be made with a
closed set of semantic cases, whereas prepositions can be combined in
a multitude of ways and classifying prepositions is not easy.
Secondly, the fact that prepositions are not attached to the word
allows a possibility for ambiguity: "a boat on the river near the
tree" could be "a boat on the (river near the tree)" or "a boat (on the
river) near the tree". Attaching affixes directly to words allows you
(potentially) to express such a sentence without ambiguity.  The Sastric
approach is to allow one to express a sentence as a series of "facts",
each agreeing with "activity".  Prepositions would not allow this.
If one hears "John was killed", some questions come to mind: who did
it, how, why.  These are actually the semantic cases agent, instrument,
and semantic ablative (apaadaanakaaraka). Instead of "on" and "near"
one would say "there is a proximity, having as its substratum an
instance of boatness... etc." in Sastric Sanskrit.  The real question
is "How good a case system is it?".  Mapping syntactic case to semantic
case is much easier than mapping prepositions, since a direct correspondence
is found automatically if you have a good case system, whereas
prepositions do not lend themselves to easy classification.
        Again, Sanskrit is NOT long-winded; it is the English
translation which is, since Sanskrit's vocabulary and methodology were
more exact than those of English.
        "Caitra cooks rice in a pot" is not represented ambiguously.
Since it is not specified whether the rice is boiled, steamed, or fried,
the correct representation should include the fact that the means of
softening the rice is unspecified, and the language does have the
ability to mark slots as unspecified (anabhihite).  Actually, cooking is
broken down even further (if-needed) and since rice is cooked by boiling
in India, that fact would be explicitly stated.  The question is how deep
a level of detail is desired; Sanskrit maintains: as far as is necessary, but
"The notion 'action' cannot be applied to the solitary point reached by
extreme subdivision", i.e. only to the point of semantic primitives.
A sentence with ambiguity like "the man lives on the Nile" in Sastric
Sanskrit is made up of the denotative meaning (the man actually lives on
the river) and the implied meaning (the man lives on the bank of the Nile).
The latter is the default meaning unless it is actually specified
otherwise.  There is a very complex theory of implication in the
literature, but sentences with implied meanings are discouraged because:
"when purport (taatparya) is present, any word may signify any meaning",
thus the Sastric system where implied meanings are made explicit.
        I do not agree that languages need to tolerate ambiguity,
in fact that is my main point.  One can take a sentence like
"Daddy ball" and express it as an imperative of  "there is a
desire of the speaker for an unspecified activity involving the ball
and Daddy."  By specifying what exactly is known and what is unknown,
one can represent a vague mental notion as precisely as is possible.
But do we really need to allow such utterances?  Would something
humanistic be lost if children simply were more explicit?  Children
in this culture are encouraged to talk this way by adults engaging
in "baby talk".  All this points to the fact that the language you
speak has a tremendous influence on your mental make-up.  If
a language more specific than English were spoken, our thoughts would
be clearer and ambiguity would not be needed.
        I conclude with another example:

  Classical Sanskrit--> raama: araNye baaNena baalinam jaghaana (Rama
  killed Baalin in the forest with an arrow) --->
  raamakartRkaa araNyaadhikaraNikaa baaNakaraNikaa praaNaviyogaanukuulaa
  parokSHaatiitakaalikii baalinkarmakaa bhaavanaa (There is an activity
  relating to the past beyond the speaker's ken, which is favourable to
  the separation of life, which has the agency of Rama, which has the
  forest as locus, Baalin as object, and which has the arrow as the
  implement.)

Note that each word represents a semantic case with its instantiation,
(eg., raama-kartRkaa having as agent Rama), with the verb "kill"
(jaghaana) being represented as an activity which is favourable
(anukuulaa) to the separation (viyoga) of praana (life).  Thus the
sentence is a list of assertions with no possibility of ambiguity.
Notice that Sanskrit expresses the notion in 42 syllables (7 words)
and English takes 75 syllables (43 words).  This ratio is fairly
indicative of the general case.
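In data-structure terms, the Sastric analysis above amounts to representing
the sentence as an unordered collection of case/filler assertions.  A rough
sketch in Python, where the slot names are English glosses chosen here for
illustration (not a standard inventory):

```python
# "Rama killed Baalin in the forest with an arrow", rendered as a set
# of labelled semantic-case assertions rather than an ordered word
# string.  Each entry glosses one compound from the Sastric paraphrase.
bhaavanaa = {
    "activity":   "favourable to separation of life",  # praaNaviyogaanukuulaa
    "time":       "past, beyond the speaker's ken",    # parokSHaatiitakaalikii
    "agent":      "Rama",                              # raamakartRkaa
    "object":     "Baalin",                            # baalinkarmakaa
    "instrument": "arrow",                             # baaNakaraNikaa
    "locus":      "forest",                            # araNyaadhikaraNikaa
}
```

Because the representation is a set of labelled facts, word order carries no
information and no attachment ambiguity can arise.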

Rick Briggs

------------------------------

Date: 19 Oct 1984  15:41 EDT (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Transformational Grammar and AI

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


        Transformational Grammar and Artificial Intelligence:
                        A View from the Bridge

                            Robert Berwick

It has frequently been suggested that modern linguistic theory is
irreconcilably at odds with a ``computational'' view of human
linguistic abilities.  In part this is so because grammars were
thought to consist of large numbers of explicit rules.  This talk
reviews recent developments in linguistic theory showing that, in
fact, current models of grammar are quite compatible with a range of
AI-based computational models.  These newer theories avoid the use of
explicit phrase structure rules and fit quite well with such
lexically-based models as ``word expert'' parsing.


Wednesday   October 24  4:00pm      8th floor playroom

------------------------------

Date: 19 Oct 84 15:35 PDT
From: Dietterich.pa@XEROX.ARPA
Reply-to: DIETTERICH@SUMEX-AIM.ARPA
Subject: PHD Oral: Theory-Driven Data Interpretation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

        PHD ORAL:       TOM DIETTERICH
                        DEPARTMENT OF COMPUTER SCIENCE

                        2:30PM OCTOBER 25
                        SKILLING AUDITORIUM


               CONSTRAINT PROPAGATION TECHNIQUES FOR
                 THEORY-DRIVEN DATA INTERPRETATION


This talk defines the task of THEORY-DRIVEN DATA INTERPRETATION (TDDI)
and investigates the adequacy of constraint propagation techniques for
performing it.  Data interpretation is the process of applying a given
theory T (possibly a partial theory) to interpret observed facts F and
infer a set of initial conditions C such that from C and T one can infer
F.  Most existing data interpretation programs do not employ an explicit
theory T, but rather use some algorithm that embodies T.  Theory-driven
data interpretation involves performing data interpretation by working
from an explicit theory.  The method of local propagation of constraints
is investigated as a possible technique for implementing TDDI.  A model
task--forming theories of the file system commands of the UNIX operating
system--is chosen for an empirical test of constraint propagation
techniques.  In the UNIX task, the "theories" take the form of programs,
and theory-driven data interpretation involves "reverse execution" of
these programs.  To test the applicability of constraint propagation
techniques, a system named EG has been constructed for the "reverse
execution" of computer programs.  The UNIX task was analyzed to develop
an evaluation suite of data interpretation problems, and these problems
have been processed by EG.  The results of this empirical evaluation
demonstrate that constraint propagation techniques are adequate for the
UNIX task, but only if the representation for theories is augmented to
include invariant facts about the programs.  In general, constraint
propagation is adequate for TDDI only if the theories satisfy certain
conditions: local invertibility, lack of constraint loops, and tractable
inference over propagated values.
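As a toy illustration of "reverse execution" by local propagation of
constraints (a sketch only, not a description of the EG system), consider a
theory given as a chain of locally invertible steps:

```python
# Theory T: a "program" given as a sequence of locally invertible steps.
# Each step carries a name, a forward function, and its local inverse.
STEPS = [
    ("double", lambda x: 2 * x, lambda y: y / 2),
    ("add-3",  lambda x: x + 3, lambda y: y - 3),
]

def execute(c):
    # Forward execution: from initial condition C, derive the observed fact F.
    for _name, forward, _inverse in STEPS:
        c = forward(c)
    return c

def interpret(f):
    # Theory-driven data interpretation: propagate the observed value F
    # backward through each step's local inverse, recovering a C such
    # that C together with the theory entails F.
    for _name, _forward, inverse in reversed(STEPS):
        f = inverse(f)
    return f
```

When a step lacks a local inverse, or the constraints form a loop, this plain
backward pass gets stuck, which matches the adequacy conditions listed at the
end of the abstract.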

------------------------------

End of AIList Digest
********************

∂24-Oct-84  1337	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #144    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Oct 84  13:37:10 PDT
Date: Wed 24 Oct 1984 11:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #144
To: AIList@SRI-AI


AIList Digest           Wednesday, 24 Oct 1984    Volume 2 : Issue 144

Today's Topics:
  Courses - Decision Systems & Introductory AI,
  Journals - Annotated AI Journal List,
  Automatic Programming - Query,
  AI Tools - TI Lisp Machines & TEK AI Machine,
  Administrivia - Reformatting AIList Digest for UNIX,
  Humor - Request for Worst Algorithms,
  Seminars - Metaphor & Learning in Expert Systems &
      Representing Programs for Understanding
----------------------------------------------------------------------

Date: Tue 23 Oct 84 13:33:06-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Responses to Decision Systems course.

Several individuals have requested further information on the course
in decision systems I teach at Stanford (advertised in AILIST a few
weeks ago).  Some of the messages I received came from non-ARPANET
sites, and I have had trouble replying electronically.  I would
appreciate getting a message from anyone who has requested information
from me and has not yet received it.  Please include a US (paper) mail
address for my reply.

Thanks,
Sam Holtzman
(HOLTZMAN@SUMEX or P.O. Box 5405, Stanford, CA  94305)

------------------------------

Date: 22 Oct 1984 22:45:40 EDT
From: Lockheed Advanced Software Laboratory@USC-ISI.ARPA
Subject: Request for information

A local community college is considering adding an introductory course in
AI to its curriculum.  Evening courses would be of benefit to a large
community of technical people interested in the subject.  The question
is what the benefit would be to first- and second-year students.

If anyone knows of any lower-division AI courses taught anywhere, could
you please drop me a line over the net?

Also, course descriptions on introductory AI classes, either lower or
upper division, would be appreciated.

Comments on the usefulness or practicality of such a course at this level
are also welcome.

                                Thank You,
                                Michael A. Moran
                                Lockheed Advanced Software Laboratory

                                address: HARTUNG@USC-ISI

------------------------------

Date: Tue, 23 Oct 84 11:34 CDT
From: Joseph←Hollingsworth <jeh%ti-eg.csnet@csnet-relay.arpa>
Subject: annotated ai journal list


I am interested in creating an annotated version of the AI-related journals
list that was published in AIList V1 N43.  I feel that this annotated list would be
beneficial for those persons who do not have easy access to the journals
mentioned in the previously published list, but who feel that some of them may
apply to their work.

I solicit information about each journal in the following form (which I will
compile and release to the AIList if there is enough interest shown):

1) Journal Name
2) Subjective opinion of the type of articles that frequently appear in that
   journal (short paragraph or so).
3) Keywords and phrases that characterize the articles/journal (don't let
   formalized keyword lists hinder your imagination).
4) The type of scientist, engineer, technician, etc. that the journal
   would benefit.
5) Address of journal for subscription correspondence (include price too,
   if possible).

Please send this information to
Joe Hollingsworth at
  jeh%ti-eg@csnet-relay  (if you are on the ARPANET)
  jeh@ti-eg              (if you are on the CSNET; I am on the CSNET)


The following is the aforementioned list of journals:

AI Magazine
AISB Newsletter
Annual Review in Automatic Programming
Artificial Intelligence
Artificial Intelligence Report
Behavioral and Brain Sciences
Brain and Cognition
Brain and Language
Cognition
Cognition and Brain Theory
Cognitive Psychology
Cognitive Science
Communications of the ACM
Computational Linguistics
Computational Linguistics and Computer Languages
Computer Vision, Graphics, and Image Processing
Computing Reviews
Human Intelligence
IEEE Computer
IEEE Transactions on Pattern Analysis and Machine Intelligence
Intelligence
International Journal of Man Machine Studies
Journal of the ACM
Journal of the Association for the Study of Perception
New Generation Computing
Pattern Recognition
Robotics Age
Robotics Today
SIGART Newsletter
Speech Technology

------------------------------

Date: 23 October 1984 22:28-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: help needed on automatic programming information

I need some information on automatic programming.

1.  How complex a problem can current automatic programming systems
handle?  The preferred metric would be complexity as measured by the
number of lines of code that a good human programmer would use to
solve the same problem.

2.  How complex a problem will future automatic programming systems be
able to handle?  Same metric, please.  Of course, who can predict the
future?  More precisely, what do the most optimistic estimates
predict, and for what time scale?

3.  In 30 years (if anyone is brave enough to look that far ahead),
what will automatic programming be able to do?

Please provide citable sources if possible.

Many thanks.

------------------------------

Date: 22 Oct 1984 12:07:39-PDT
From: William Spears <spears@NRL-AIC>
Subject: TI Lisp machines


     The AI group at the Naval Surface Weapons Center is interested in the new
TI Lisp Machine. Does anyone have any detailed information about it? Thanks.

                                       "Always a Cosmic Cyclist"
                                        William Spears
                                        Code N35
                                        Naval Surface Weapons Center
                                        Dahlgren, VA 22448

------------------------------

Date: 22 Oct 84 08:10:32 EDT
From: Robert.Thibadeau@CMU-RI-VI
Subject: TEK AI Machine

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I have good product literature on the Tektronix 4404 Artificial
Intelligence System (the workbook for their people).  This appears
to be a reasonable system which supports Franz Lisp, Prolog,
and Smalltalk-80.  It uses a 68010 with floating point hardware
and comes standard with a 1024↑2 bit map, 20mb disk, floppy,
centronics 16 bit port, RS232, 3-button mouse, ethernet interface,
1 mbyte RAM, and a Unix OS.  The RAM upgrades at least 1 more mbyte
and you can have a larger disk and streaming tape. The major thing
is that the price (retail without negotiation) is $14,950 complete.
It is apparently real, but I don't know this system first hand.
The product description is all I have.

------------------------------

Date: Sat, 20 Oct 84 23:10:53 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: reformatting AILIST digest for UNIX


        For those of you on Berkeley UNIX installations, there is a
program available which makes the slight modifications to the AIList digest
necessary to put it in the correct format for "mail -f ...".  This
lets you use the UNIX mail system's functionality to maintain your AIList
digest files.

For a copy of the program, net to:

douglas stumberger
csnet:  des@bostonu

------------------------------

Date: Mon 22 Oct 84 10:30:00-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: worst algorithms as programming jokes

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

After reading the recent complaint(s) about those people who slow down the
system with their silly programs to sort a 150-element list, and talking with
a friend, I came up with the following dumb idea :

A lot of emphasis is understandably put on good, efficient algorithms, but
couldn't we learn also from bad, terrible algorithms ? I have heard that Dan
Friedman at Indiana collects elegant LISP programs that he calls LISP poems.
To turn things upside down, how about LISP jokes (more generally, programming
jokes) ? I'm pretty sure most if not all of programmers have some day (night)
burst into laughter when encountering an algorithm that is particularly dumb,
and funny for the same reason.

I don't know whether anyone ever collected badgorithms (sorry, that was the
worst name I could find), so I suggest that you bright guys send me your
favorite entries.

To qualify as a badgorithm, the following conditions should be met:
(if you don't like them, send me your suggestions for a better definition)

1. It *is* an algorithm in the sense described by Knuth Vol 1.
2. It *does* solve the problem it addresses. Entering the Knuth-Bendix
   algorithm as a badgorithm for binary addition is illegal (though I admit it
   is somewhat funny).
3. Though it solves the problem, it must do so in an essentially clumsy way.
   Adding loops to slow down the algorithm is cheating. In some sense a
   badgorithm should totally miss the right structure to approach the problem.
4. The hopeless off-the-track-ness of a badgorithm should be humorous for
   someone knowledgeable about the problem addressed. We are not interested
   in alborithms, right ? Just being the second or third best algorithm for
   a problem is not enough to qualify (think of the "common sense" algorithm
   for finding a word in a text as opposed to the Boyer-Moore algorithm, or of
   the numerous ways to evaluate a polynomial as opposed to Horner's rule;
   there is nothing to laugh at in those cases). There is nothing funny in just
   being a O(n↑(3/(pi↑3)-1/e)) algorithm, I think.
5. It should be described in a simple, clear way. Remember that the best jokes
   are the shortest ones. I'm sure there are enough badgorithms for well-known
   problems (classical list manipulation, graph theory, arithmetic,
   cryptography, sorting, searching, etc). Please don't enter algorithms
   to solve NP problems unless you have good reasons to think they are
   interesting in our sense.




If anyone out there is willing to send me an entry, please send the following:

* a simple description of the problem (the name is enough if it's a well-known
  problem).
* a verbal description of the badgorithm if possible.
* a programmed version of the badgorithm (in LISP preferably). This is not
  necessary if your verbal description makes it clear enough how to write
  such a program, but still it would be nice.
* a description of a good algorithm for the same problem in case most people
  are not expected to be familiar with one. Comparing this to the badgorithm
  should help us in seeing what's wrong with the latter, and I would say that
  this could have good educational value.


To start things, let me enter my favorite badgorithm (I call it "stupid-sort"):

* the problem is to sort a list, according to some "order" predicate.
* well, that's easy. just generate all permutations of the list, and then
  check whether they are "order"ed. would you bet that someone in CS105
  does actually use this one ?
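Written out (in Python rather than LISP, purely for illustration),
stupid-sort meets the definition: it really is an algorithm and it really
does sort, just in O(n!) time:

```python
from itertools import permutations

def stupid_sort(items, ordered=lambda a, b: a <= b):
    # Generate every permutation of the list in turn and return the
    # first one that satisfies the "order" predicate at every adjacent
    # pair.  Worst case: n! permutations, each checked in O(n) time.
    for candidate in permutations(items):
        if all(ordered(candidate[i], candidate[i + 1])
               for i in range(len(candidate) - 1)):
            return list(candidate)
```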

  [I once had to debug an early version of the BMD nonparametric
  package.  It found the min and max of a vector by sorting the
  elements ...   (Presumably most users would also request the
  median and other sort-related statistics.)  For a particularly
  slow sort routine see the Hacker's Dictionary definition of JOCK,
  quoted in Jon Bentley's April Programming Pearls in CACM.  -- KIL]


I understand perfectly that some people/organizations do not wish to have their
names associated with badgorithms, but please don't refrain from entering
something because of that. I swear that if you request it there will be no
trace of the origin of the entry if I ever compile a list of them for personal
or public use (you know, "name withheld by request" is the usual trick).

jean-luc

------------------------------

Date: 17 Oct 1984 16:25-EDT
From: Andrew Haas at BBNG.ARPA
Subject: Seminar - Metaphor

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Next week's BBN AI seminar is on Thursday, October 25th at 10:30
AM in the 3rd floor large conference room.  Bipin Indurkhya of
the University of Massachusetts at Amherst will speak on "A
Computational Theory of Metaphor Comprehension and Analogical
Reasoning".  Abstract follows.

   Though the pervasiveness and importance of metaphors in
natural languages are widely recognised, not much attention has
been given to them in the fields of Artificial Intelligence and
Computational Linguistics.  Broadly speaking, a metaphor can be
characterized as the application of terms belonging to a source domain
in describing a target domain.  A large class of such metaphors is
based on structural analogy between the two domains.

   A computational model of metaphor comprehension was proposed
by Carbonell which required an explicit representation of a
mapping which maps terms of the source domain to the terms of the
target domain.  In our research we address ourselves to the
question of how one can characterize this mapping in terms of the
knowledge of the source and the target domains.

       In order to answer this question, we start from Gentner's
theory of Structure-Mapping.  We show limitations of Gentner's
theory and propose a theory of Constrained Semantic Transference
[CST] that allows part of the structure of the source domain to
be transferred to the target domain coherently.  We will then
introduce two recursive operators, called Augmentation and
Positing Symbols, that make it possible to create new structure
in the target domain constrained by the structure of the source
domain.

     We will show how CST captures several cognitive properties
of metaphors and then discuss its limitations with regard to
computability and finite representability.  If time permits, we
will use CST as a basis to develop a theory of Approximate
Semantic Transference which can be used to develop computational
models of the cognitive processes involved in metaphor
comprehension, metaphor generation, and analogical reasoning.

------------------------------

Date: Tue 23 Oct 84 10:45:51-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Learning in Expert Systems

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, October 26, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Li-Min Fu
             Electrical Engineering

ABSTRACT:    LEARNING OBJECT-LEVEL AND META-LEVEL KNOWLEDGE IN EXPERT SYSTEMS

A high performance expert system can be built by exploiting machine
learning techniques.  A learning method has been developed that is
capable of acquiring new diagnostic knowledge, in the form of rules,
from a case library.  The rules are designed to be used in a
MYCIN-like diagnostic system in which there is uncertainty about data
as well as about the strength of inference and in which the rules
chain together to infer complex hypotheses.  These features greatly
complicate the learning problem.

In machine learning, two issues that can't be overlooked are
efficiency and noise.  A subprogram, called "Condenser," is designed
to remove irrelevant features during learning and improve the
efficiency.  It works well when the number of features used to
characterize training instances is large.  One way of removing noise
associated with a learned rule is seeking a state with minimal
prediction error.

Another subprogram has been developed to learn meta-rules which guide
the invocation of object-level rules and thus enhance the performance
of the expert system using the object-level rules.

Embodying all the ideas developed in this work, an expert program
called JAUNDICE has been built which can diagnose the likely cause and
mechanisms of jaundice in a patient.  Experiments with JAUNDICE show that
the developed theory and method of learning are effective in a complex
and noisy environment where data may be inconsistent, incomplete, and
erroneous.

Paula

------------------------------

Date: Tue, 23 Oct 84 00:08:10 cdt
From: rajive@ut-sally.ARPA (Rajive Bagrodia)
Subject: Seminar - Representing Programs for Understanding

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

                      Graduate Brown Bag Seminar:

                Representing Programs For Understanding
                                  by
                              Aaron Temin

                         noon  Friday Oct. 26
                               PAI 3.36


        Automatic help systems would be much easier to generate than
        they are now if the same code used to create the executable
        version of a program could be used as the major database for
        the help system.  The desirable properties of such a program
        representation will be discussed.  An overview of MIRROR,
        our implementation of those properties, will be presented with
        an explanation of why MIRROR works.  It will also be argued
        that functional program representations are inadequate for the
        task.


If you are interested in receiving mail notifications of graduate brown bag
seminars in addition to the bboard notices, please send a note to
                            briggs@ut-sally

------------------------------

End of AIList Digest
********************

∂27-Oct-84  2326	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #145    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Oct 84  23:25:47 PDT
Date: Sat 27 Oct 1984 21:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #145
To: AIList@SRI-AI


AIList Digest           Saturday, 27 Oct 1984     Volume 2 : Issue 145

Today's Topics:
  Administrivia - Usenet Disconnection,
  AI Languages - Buzzwords,
  Expert Systems - Logic-Based Expert Systems & Critique,
  Humor - Expert Systems & Recursive Riddle & Computational Complexity,
  Algorithms - Bad Algorithms as Programming Jokes,
  Seminars - Nonmonotonic Inference & Mathematical Language,
  Symposium - Expert Systems in the Government
----------------------------------------------------------------------

Date: Sat 27 Oct 84 21:36:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Usenet Disconnection

The SRI-UNIX host that has been the AIList gateway between Arpanet
and Usenet has been undergoing system changes.  This broke the
connection about a week ago, and I do not know how soon communication
will be restored.  Meanwhile the discussion continues asynchronously
in the two networks.

                                        -- Ken Laws

------------------------------

Date: Mon 22 Oct 84 11:18:59-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: buzzwords for different language types

My favorite buzzwords are "low-level" for C, Pascal, and Ada, and
"high-level" for Lisp  :-)

But seriously, one can adopt a very abstract (i.e. applicative/functional)
programming style or a very imperative (C-like) style when using Lisp.
On the other hand, adopting an applicative style in C is difficult (yes,
I've tried!).  So Lisp is certainly more versatile.  Also, Lisp's direct
representation of programs as data facilitates the construction of
embedded languages and the writing of program-analysing programs, both
important activities in the construction of AI systems.  On the other
hand, both of these are time-consuming, if not downright difficult, in C
or Pascal.

Incidentally, these remarks largely apply to Prolog also (although Prolog
doesn't make it easy to do "low-level" programming).

                                                        stan shebs

------------------------------

Date: Thu 25 Oct 84 20:59:56-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Logic-based Expert Systems

Regarding expert system tools: would anyone like to offer some reasoned
opinions on the suitability of logic-based systems for building them?
I have no strong definition of "logic-based" to offer, but I have in
mind as prime examples MRS from Stanford and DUCK from SST which provide
interfaces to LISP, forward and back chaining, and various
extra-logical functions to make life easier for the system builder.  I
am interested in large systems (1000+ rules desirable) and the control
and performance problems and solutions that people have found.  Can
such systems be built successfully?  What techniques to constrain
search have been tried and worked/failed?  Any references?

Charles Petrie

------------------------------

Date: Sun, 21 Oct 84 20:28:24 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Expert system critique.

An article appears in the current (November/December) issue of
``The Sciences'' (New York Academy of Sciences) by Hubert and
Stuart Dreyfus of Berkeley.  The article ``Mindless Machines''
asserts that `computers don't think like experts, and never
will,' invoking, in part, Plato's ``Euthyphro'' (Euthyphro is
a theologian queried by Socrates as to the true nature of
piety) as an allegory.  The basic assertion is that so-called
expert systems reason purely from rules, whereas human experts
intuit from rules using the vast experience of special cases.
They cite this `intuition' as being an insurmountable barrier
to building intelligent machines.
                                            Harry Weeks
                                            (Weeks@UCBpopuli)

------------------------------

Date: Fri 26 Oct 84 06:46:39-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: is there an Expert-System like that ?? (-:

[ cartoon from InfoWorld, Nov 5, 84, page 7 ]

( 2 ladies having tea in the 'parlor', chatting.  With a somewhat perplexed
  expression, one stares at a small dirt-pile on the carpet, while the
  obvious hostess explains with a smug grin: )

        "I thought he was a vacuum cleaner salesman.  He came in,
         sprinkled dirt on the carpet and then tried to sell me a
         software program that would show me how to clean it up."

------------------------------

From: gibson@unc.UUCP (Bill Gibson)
Subject: Recursive Riddle

               [Forwarded from Usenet by SASW@MIT-MC.]


   How many comedians does it take to tell a Light Bulb Joke ?

   Two - one to say,
   "How many comedians does it take to tell a Light Bulb Joke?
    Two - one to say,
    "How many comedians does it take to tell a Light Bulb Joke?
     Two - one to say,
     "How many comedians does it take to tell a Light Bulb Joke?
      Two - one to say,
      "How many comedians does it take to tell a Light Bulb Joke?
         ...
                 and one to ask nonsense riddles."
         ...
       and one to ask nonsense riddles."
      and one to ask nonsense riddles."
     and one to ask nonsense riddles."
    and one to ask nonsense riddles."
   and one to ask nonsense riddles.

 - from the parallel process of -     Bill Gibson

------------------------------

Date: Wed 24 Oct 84 19:13:16-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: minor correction on my msg on "badgorithms"

After reading the message again, I *do* find an O(n↑(3/(pi↑3) - 1/e))
algorithm interesting and unusual.  I'd be real glad to see, and maybe
even touch, one.

------------------------------

Date: Thu, 25 Oct 84 07:38 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: worst algorithms as programming jokes

A very interesting idea, but "badgorithm" as a label should have been
strangled at birth.

How about "algospasm"?

Mark

------------------------------

Date: Thu, 25 Oct 84 08:14:31 cdt
From: "Duncan A. Buell" <buell%lsu.csnet@csnet-relay.arpa>
Subject: Bad Algorithms

Jean-Luc Bonnetain suggests worst algorithms (badgorithms) as programming
jokes.  In a similar vein, with an interest in winning the Cold War by
shipping some of these to the Soviet Union: what is the slowest possible
way to sort a list of N items?  The only requirement (this problem may
not be well-defined yet, but I'm sure people could produce subproblems
that are) should be that no state or sequence of states is ever
repeated, and that the method does actually sort the list at some
future date.

As an example of how to think about this, consider generating the permutations
of N things, then comparing the existing list against each permutation.
How slowly, then, can we generate the permutations of N things?  We could
isolate one element, generate permutations of N-1 things, and then insert
the isolated element in N different places.  Ignoring the symmetry of the
situation, we could isolate a second element and continue (is this cheating
on the rule?).  And generating permutations of N-1 things?
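The scheme above -- enumerate the permutations in some fixed order and stop at the sorted one -- can be sketched as follows.  A minimal illustration in Python, with all names my own; it is the base case, before any of the slowdowns proposed above are applied:

```python
from itertools import permutations

def permutation_sort(items):
    # Deliberately pessimal: enumerate the permutations of the input
    # in a fixed order and return the first one that is sorted.  The
    # worst case examines all N! orderings, but no state is ever
    # repeated and the method does eventually sort the list.
    for perm in permutations(items):
        if all(perm[i] <= perm[i + 1] for i in range(len(perm) - 1)):
            return list(perm)

print(permutation_sort([3, 1, 2]))  # [1, 2, 3]
```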

------------------------------

Date: 25 Oct 84 09:58 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Re: Badgorithms in AIList Digest   V2 #144

The examples of badgorithms that come to mind (including sorting by
selecting an ordered permutation, finding min and max by sorting, or for
that matter defining the last element of a list as the CAR of the
reverse of the list, or testing for empty intersection by computing the
entire intersection and then seeing if it's empty) all have in common
that they make use of existing constructs that do what is desired and
much more.  I think that these are very reasonable PROGRAMS even if they
normally correspond to bad ALGORITHMS.
   The point is that various projects in program transformation
(especially partial evaluation) take as input such programs and
automatically transform them into programs that correspond to very
reasonable algorithms.  Also, true fans of logic programming, who
believe that an algorithm = logic + control, use sort as ordered
permutation as their classic example.  They add control annotations that
cause the permutation activity to be coroutined with the order
selection.
  I'm looking forward to the day when one can write programs that, if
interpreted naively, correspond to badgorithms and yet are either
transformed automatically or interpreted cleverly enough that they run
like a bat out of hell.
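Two of the examples mentioned above, sketched minimally in Python (names are my own): each is perfectly clear as a program, yet pessimal as an algorithm, because it invokes a construct that does what is desired and much more.

```python
def last(xs):
    # Last element as the head of the reverse: a clear program, but
    # it copies and reverses the whole list where one index would do.
    return list(reversed(xs))[0]

def min_and_max(xs):
    # Min and max by sorting: correct, but O(n log n) work in place
    # of a single linear scan.
    s = sorted(xs)
    return s[0], s[-1]

print(last([1, 2, 3]))         # 3
print(min_and_max([4, 1, 9]))  # (1, 9)
```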

------------------------------

Date: 24 Oct 1984 10:35-EDT
From: MVILAIN at BBNG.ARPA
Subject: Seminar - Nonmonotonic Inference

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"A Non-Monotonic Inference System"
James W. Goodwin, University of Linkoping.

BBN Laboratories, 10 Moulton St, Cambridge.
Third floor conference room, 10:30 AM.
Tuesday October 30th.


We present a theory and implementation of incomplete non-monotonic
reasoning. The theory is inspired by the success of inference systems
based on dependency nets and reason maintenance. The process of inference is
conceived as a monotonic accumulation of constraints on belief sets.
The "current database" is just the set of constraints accumulated so far;
the current beliefs are then required to be a set which satisfies all the
constraints in the current database, and contains no beliefs which are not
forced by those constraints. Constraints may also be thought of as reasons, or
as dependencies, or (best) simply as individual inference steps.

This approach allows an inference to depend on aspects of the current state
of the reasoning process. In particular, an inference may support P on the
condition that Q is not in the current belief set. This sense of
non-monotonicity is conveniently computable (by reason maintenance), so the
undecidability of Non-monotonic Logic I and its relatives is avoided. This
makes possible a theory of reasoning which is applicable to real agents, such
as computers, which are compelled to arrive at some conclusion despite
inadequate time and inadequate information. It supports a precise idea
of "reasoned control of reasoning" and an additive representation for control
knowledge (something like McCarthy's Advice Taker idea).

------------------------------

Date: 26 Oct 84 15:47:53 EDT
From: Ruth.Davis@CMU-RI-ISL1
Subject: Seminar - Mathematical Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Date:  Monday, October 29
Title:  PRL:  Practical Formal Mathematics
Speaker:  Joe Bates, Cornell University
Time:  1:30 pm
Location:  4605 WEH



             PRL: Practical Formal Mathematics
                     Joseph Bates
                  Cornell University

PRL is a family of development environments which are designed to
support the construction, validation, execution, and communication of
large bodies of mathematics text (eg, books on graph algorithms or
group theory).  The design of these systems draws on work in many
areas, from philosophy to Lisp hackery.  Tuesday, Constable will speak
on certain issues in the choice of PRL's mathematical language.  I
will present, in detail, the most significant aspects of the current
system architecture, and will suggest directions for future work.

------------------------------

Date: 26 Oct 1984  9:27:12 EDT (Friday)
From: Marshall Abrams <abrams@mitre>
Subject: Symposium - Expert Systems in the Government

I am helping to organize a Symposium on Expert Systems in the Federal
Government. In addition to papers, I am looking for people to serve on
the program committee and the conference committee, and to serve as
reviewers and session chairmen. The openings on the conference committee
include local arrangements, publicity, and tutorials.

Please contact me or the program chairman (or both by net-mail) with
questions and suggestions. The call for papers follows.

Call for Papers

Expert Systems in Government Conference

October 23-25, 1985

THE CONFERENCE objective is to allow the developers and implementers
of expert systems in government agencies to exchange information and
ideas first hand for the purpose of improving the quality of
existing and future expert systems in the government sector.
Artificial Intelligence (AI) has recently been maturing so rapidly
that interest in each of its various facets, e.g., robotics, vision,
natural language, supercomputing, and expert systems, has acquired
an increasing following and cadre of practitioners.

PAPERS are solicited which discuss the subject of the conference.
Original research, analysis and approaches for defining  expert
systems issues and problems such as those identified in the
anticipated session topics, methodological approaches for analyzing
the scope and nature of expert system issues, and potential
solutions are of particular interest.  Completed papers are to be no
longer than 20 pages including graphics and are due 1 May 1985.
Four copies of papers are to be sent to:

Dr. Kamal Karna, Program Chairman
MITRE Corporation W852
1820 Dolley Madison Boulevard
McLean, Virginia  22102
Phone (703) 883-5866
ARPANET:  Karna @ Mitre

Notification of acceptance and manuscript preparation instructions
will be provided by 20 May 1985.

THE CONFERENCE is sponsored by the IEEE Computer Society and The
MITRE Corporation in cooperation with The Association for Computing
Machinery, The American Association for Artificial Intelligence and
The American Institute of Aeronautics and Astronautics National
Capital Section.  This conference will offer high quality technical
exchange and published proceedings.

It will be held at Tyson's Westpark Hotel, Tysons Corner, McLean,
VA, suburban Washington, D.C.


TOPICS OF INTEREST

The topics of interest include, but are not limited to, expert systems
in the following application domains:

 1.  Professional:           Accounting, Consulting, Engineering,
                             Finance, Instruction, Law, Marketing,
                             Management, Medicine

 2.  Office Automation:      Text Understanding, Intelligent
                             Systems, Intelligent DBMS
 3.  Command & Control:      Intelligence Analysis, Planning,
                             Targeting, Communications, Air Traffic
                             Control

 4.  Exploration:            Space, Prospecting, Mineral, Oil,
                             Archeology

 5.  Weapon Systems:         Adaptive Control, Electronic Warfare,
                             Star Wars, Target Identification

 6.  System Engineering:     Requirements, Preliminary Design,
                             Critical Design, Testing, and QA

 7.  Equipment:              Design Monitoring, Control, Diagnosis,
                             Maintenance, Repair, Instruction

 8.  Project Management:     Planning, Scheduling, Control

 9.  Flexible Automation:    Factory and Plant Automation

10.  Software:               Automatic Programming, Specifications,
                             Design, Production, Maintenance and
                             Verification and Validation

11.  Architecture:           Single, Multiple, Distributed Problem
                             Solving Tools

12.  Imagery:                Photo Interpretation, Mapping, etc.

13.  Education:              Concept Formation, Tutoring, Testing,
                             Diagnosis, Learning

14.  Entertainment and       Intelligent Games, Investment and
     Expert Advice Giving:   Finances, Retirement, Purchasing,
                             Shopping, Intelligent Information
                             Retrieval

------------------------------

End of AIList Digest
********************

∂28-Oct-84  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #146    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Oct 84  00:29:37 PDT
Date: Sat 27 Oct 1984 22:10-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #146
To: AIList@SRI-AI


AIList Digest            Sunday, 28 Oct 1984      Volume 2 : Issue 146

Today's Topics:
  Report - CSLI Description,
  Linguistics - Indic Interlingua & Evolution & Shastric Sanskrit,
  Seminars - Knowledge and Common Knowledge & Gestalt Tutorial &
    AI and Real Life
----------------------------------------------------------------------

Date: Wed 24 Oct 84 18:33:02-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Institute Description - CSLI

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                        NEW CSLI REPORT

Report No. 16, ``The Center for the Study of Language and Information,'' has
just been published. It describes the Center and its research programs. An
online copy of this report can be found in the <CSLI> directory in the file
``Report-No-16.Online.'' In addition to this report, the <CSLI> directory
contains other valuable information about the Center and Turing.  To obtain
a printed version of Report No. 16, write to Dikran Karagueuzian, CSLI,
Ventura Hall, Stanford 94305 or send net mail to Dikran at Turing.

------------------------------

Date: Sun, 21 Oct 84 20:06:59 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Indic interlingua.

If I recall correctly, the continuing colloquy on Sastric Sanskrit was
motivated by the desire for a natural interlingua for machine
translation.  Pardon my ignorance, but I do not see the efficacy of
translating a language first into something like Sastric Sanskrit with
its concomitant declensional, conjugational and euphonic complexity,
then from there into the target language.  Are not less complex (and
less verbose) formalisms more appropriate, not being weighted with
aesthetic amenities and cultural biases?  If Sastric Sanskrit is
otherwise being offered as a paradigm for such a formalism, a more
detailed insight into its grammar is needed.

Another facet of the colloquy is its focus on ambiguity in the
relationship of semantic elements (viz. words) in sentences.  There is
also the problem of determining unambiguously the meaning of a word,
when in natural languages words often have more than one meaning
depending on context.  Is Sastric Sanskrit unique in its vocabulary as
well as its grammar, in that each word has but one precisely
circumscribed meaning, and how eclectic and deep is this vocabulary?
Certainly the professed unequivocality of the syntax is an aid to
determining meanings of the words whose interrelationship is thus well
defined, but it would seem preferable not to rely on context or on
clumsy defining clauses in an interlingua.

As an aside on ambiguity being requisite for a literature in a
language, I might proffer two opinions.  A great writer is often
characterized by his ability to mold sentences which have an uncommon
fluidity and expressivity -- would an unambiguous language allow such
freedom?  Great poetry invokes thoughts and emotions which defy written
expression through the use of rhythm and juxtaposition of disparate
images through words set in defiance of strict grammatical precepts.
Further, the beauty of prose or poetry lies in good part in the use of
ambiguity.  Especially in poetry, distilling many emotions into a
compact construction is facilitated by ambiguity, either semantic or
phonetic.  The beauty of poetry is a very different one from the beauty
of logic or mathematics.

                                                Harry Weeks
                                                (Weeks@UCBpopuli)

------------------------------

Date: Mon, 22 Oct 84 10:06 EDT
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: language evolution


Marcel Schoppers (AIList Digest V2 #143) seems to suggest that certain
dialects (e.g. those which include "Why he be leavin'?" and  "He ain't goin'
nowhere") are the result of forces which SIMPLIFY the grammar of a language:

     ".. my own theory is that the invasion of a language by foreign cultures,
     or vice versa, has a lot to do with how simple a language becomes:
     cross-cultural speakers tend to use only as much as absolutely necessary
     for them to consider themselves understood."

The analyses that I have seen show that such dialects are just as complex,
linguistically, as the standard dialect.  They are just complex in different
ways.  As I understand it, simplified PIDGIN languages quickly evolve into
complex CREOLE languages - all it takes is one generation of native speakers.

Tim

------------------------------

Date: Wed 24 Oct 84 23:23:56-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: linguistics

        I would like to respond to several linguistic questions discussed
recently. First, in response to Rick Briggs' re-assertion that Shastric Sanskrit
is a natural language, his claim that there was a literature written in it
and that it was in use for over three thousand years is simply irrelevant.
The same could perfectly well be true of an artificial language. There is
literature written in Esperanto, an artificial language which is also used
for scientific communication. It is perfectly possible that Esperanto will
remain with us for thousands of years. But we all know that it is an artificial
language. What makes it artificial is that it was consciously designed
by a human being -- it did not evolve naturally.
        This leads to the question of whether Shastric Sanskrit is a natural
language. It looks like it isn't. Rather, it is an artificial language
based on Sanskrit that was used for very limited purposes by scholars. I
challenge Rick Briggs to present evidence that (a) it was in use for anything
like 3000 years; (b) that anyone ever spoke it; (c) that even in written form
it was used extensively at any period; (d) that it was not always restricted
to scholars just as mathematical language is today.
        There has also been some speculation about the historical development
of languages. One idea presented is that languages evolve from morphologically
complex to morphologically simple. This is just not true. It happens to be
true of a number of the Indo-European languages with which non-linguists are
most familiar, but it is not true in general. Second, someone claimed that
the languages of "aboriginal people" (I assume he means "technologically
primitive") are complex and badly organized, and that languages evolve
as people become technologically more advanced. This was a popular idea
in the early nineteenth century but was long ago discarded. We know of no
systematic differences whatever between the languages spoken by primitive
people and those spoken by technologically advanced people. There is no
evidence that language evolves in any particular direction.
        Finally, Briggs mistakenly characterizes linguists as prescriptivists.
That is quite false. In fact, the prescriptivists are mainly English and
Literature people or non-academics like William Safire. Linguistics is
non-prescriptive by definition since we are interested in describing what
occurs in natural language and characterizing the possible natural languages.
        Lastly (here comes a minor FLAME), why don't you guys read some
serious Linguistics books or ask a linguist instead of posting ignorant
speculation about linguistic issues? Some of us do Linguistics for a
living and there is extensive technical literature on many of these
questions. If I want to, say, know about algorithms I don't sit
around guessing. I look it up in a book on algorithms or ask a computer
scientist.

------------------------------

Date: Thu, 25 Oct 1984  00:09 PDT
From: KIPARSKY@SU-CSLI.ARPA
Subject: Even "shastric" Sanskrit is ambiguous

Take the example "Caitra is cooking rice in a pot". It is ambiguous in
both Sanskrit and English as to whether it is the rice that is in the
pot, or Caitra himself. Clearly the "shastric" paraphrase "There is an
activity subsisting in a pot..."  doesn't resolve this ambiguity. That
can only be done by distinguishing between subject- and object-
oriented locatives (which, incidentally, some natural languages do).
The reason why the Sanskrit logicians' paraphrases don't make that
distinction is that they follow Panini in treating locatives, like all
other karakas, simply as arguments of the verb.  In general, shastric
paraphrases, though certainly very explicit and interesting, are by no
means an "unambiguous language". What they make explicit about the
meanings of Sanskrit sentences is limited by the interpretations
assigned to those sentences by the rules of Panini's grammar.  This
grammar introduces only such semantic categories as are needed to
account for the distribution of Sanskrit grammatical formatives.  So
shastric paraphrases wind up leaving some of the ambiguities of the
corresponding ordinary Sanskrit sentences unresolved.

This sentence and its shastric paraphrase are ambiguous in other ways
as well, namely with regard to aspect ("cooks" or "is cooking"), and
definiteness ("the pot" or "a pot"). These categories don't play a
role here though they do in other areas of Sanskrit.  E.g.  the
generic/progressive distinction is important in derived nouns, where
English in turn ignores it: Sanskrit has two words for "driver",
depending on whether the activity is habitual/professional or not; a
shastric paraphrase might make the distinction explicit for such nouns.

The prevalence of this logicians' system of paraphrasing should not be
exaggerated, by the way. There is no evidence of it having been around
for anything like 3000 years(!), and it is not, to my knowledge, used in
any "literature" other than technical works on philosophy.

------------------------------

Date: Thu, 25 Oct 84 10:28 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: reply to schoppers@xerox

   'United States "English", being the lingo of the melting pot,
    will probably change faster than most.'

The historical linguists tell us that when groups of speakers physically
move and establish a new language group, as has happened here in the US,
the 'new' dialect actually changes more slowly than that of the original
language group, in this case British English.  As simple evidence, witness the
fact of the diverse English dialects in the British Isles versus the far more
homogeneous regional dialects in the US.  There is also textual evidence from
poetry (rhythm, etc) showing that present day American English has preserved
the patterns of Middle English and early Modern English whereas present day
British English has changed.
-Godden@gmr

[Note that the availability of national radio and television broadcasts in
this century may be altering the evolution of modern dialects.  -- KIL]

------------------------------

Date: 25 Oct 1984 15:54-EDT
From: AHAAS at BBNG.ARPA
Subject: Seminar - Knowledge and Common Knowledge

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

There will be an AI seminar at 10:30 AM Friday November 2, in the
3rd floor large conference room. Abstract follows:


     Knowledge and Common Knowledge In Distributed Environments

                Yoram Moses, Stanford University


Knowledge plays a fundamental role in distributed environments.  An
individual in a distributed environment, be it a person, a robot, or a
processor in a network, depends on his knowledge to drive his
decisions and actions. When individuals' actions have an effect on one
another, it is often necessary that their actions be coordinated. This
coordination is achieved by a combination of having a predetermined
common plan of some kind, and communicating to expand and refine it.
The states of knowledge that are relevant or necessary in order to
allow the individuals to successfully carry out their individual plans
vary greatly according to the nature of the dependence of their plans
on the actions of others.

This work introduces a hierarchy of states of knowledge that a system may
be in.  We discuss the role of communication in ``improving'' the system's
state of knowledge of a given fact according to this hierarchy. The
strongest notion of knowledge that a group can have is Common Knowledge.
This notion is inherent in agreements and coordinated simultaneous actions.
We show that common knowledge is not attainable in practical systems, and
present a variety of relaxations of common knowledge that are attainable
in many cases of interest.  The relationship between these issues and
communication and action in a distributed environment is made clear through
a number of well known puzzles.

This talk should be of interest for people interested in distributed
algorithms, communication protocols, concurrency control and AI.  This work
is joint with Joe Halpern of IBM San Jose.

------------------------------

Date: Fri, 26 Oct 84 15:56:32 pdt
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Gestalt Tutorial

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

SPEAKER:        Steven E. Palmer, Psychology Department and
                Cognitive  Science Program, UC Berkeley

TITLE:           ``Gestalt Then and Now: A Tutorial Review''


TIME:                Tuesday, October 30, 11 - 12:30
PLACE:               240 Bechtel Engineering Center
DISCUSSION:          12:30 - 2 in 200 Building T-4

ABSTRACT:       I will present an overview of the nature and
                importance  of  the Gestalt approach to per-
                ception and cognition with  an  emphasis  on
                its  relation  to  modern  work in cognitive
                science. First I will discuss the nature  of
                the  contribution  made by Gestalt psycholo-
                gists in the  historical  context  in  which
                they worked.  Then I will trace their influ-
                ence on some current work in cognitive  sci-
                ence:  textural segmentation (Julesz, Beck &
                Rosenfeld), Pragnanz (Leeuwenberg,  Palmer),
                soap-bubble    systems   (Marr   &   Poggio,
                Attneave,  Hinton),  and  global  precedence
                (Navon, Broadbent, Ginsberg).


Beginning with this talk, the Cognitive Science Seminar will periodically
present tutorials as a service to its interdisciplinary audience.  Each
tutorial will review the ideas in some research area for workers outside
the field.

------------------------------

Date: Thu, 25 Oct 84 15:17:33 EDT
From: "Martin R. Lyons" <991@NJIT-EIES.MAILNET>
Subject: Seminar - AI and Real Life

                     ARTIFICIAL INTELLIGENCE AND REAL LIFE

     "Artificial Intelligence and Real Life", a talk by Paul Levinson of The
New School for Social Research, will be one of several topics discussed as
part of the Second Colloquium on Philosophy and Technology.  The event is
co-sponsored by the Media Studies Program of the New School for Social Research
and the Philosophy & Technology Studies Center at the Polytechnic Institute of
New York.  The talk will be held at the New School's 66 W. 12th St. Building,
NYC, Monday November 12th, at 8pm, and the general public is invited.
Admission is free.

     I am passing this info on for Paul Levinson, the aforementioned speaker.
He can be reached directly at this site as:
Lev%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA or
@MIT-MULTICS.ARPA:Lev@NJIT-EIES.Mailnet

     Please do not address inquiries to me, as all the info I have is above.

 MAILNET: Marty@NJIT-EIES.Mailnet
 ARPA:    Marty%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA
 USPS:    Marty Lyons, CCCC/EIES @ New Jersey Institute of Technology,
          323 High St., Newark, NJ 07102    (201) 596-2932
 "You're in the fast lane....so go fast."

------------------------------

End of AIList Digest
********************

∂31-Oct-84  0030	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #147    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 Oct 84  00:30:05 PST
Date: Tue 30 Oct 1984 22:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #147
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Oct 1984    Volume 2 : Issue 147

Today's Topics:
  LISP - Function-itis & Comparison with C,
  Algorithms - Pessimal Algorithms & Real Programmers,
  Seminars - Robot Navigation & Accessibility of Analogies & Student Models
----------------------------------------------------------------------

Date: Sun 28 Oct 84 17:40:10-PST
From: Shawn Amirsardary <SHAWN@SU-SCORE.ARPA>
Subject: Lisp Function-itis

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Lisp with its very elegant  syntax suffers from acute function-itis.   When
adhering to the traditional  lispish style of using  very few setq and  god
forbid even  fewer progs,  you  end up  with about  a  million and  a  half
functions that get called from usually  only one place.  Of course LET  and
lambda help, but not that much.  My question is does anybody know of a good
method for ordering and perhaps even naming the little suckers?  In  pascal
you have to define  procedures before you  use them, but  the lack of  such
restrictions in lisp means that functions are all over the place.  What  is
the cure?

                                --Shawn

------------------------------

Date: Tue, 30 Oct 84 21:42:58 -0200
From: eyal@wisdom (Eyal mozes)
Subject: Re: different language types

  > But seriously, one can adopt a very abstract (i.e.
  > applicative/functional) programming style or a very imperative
  > (C-like) style when using Lisp.  On the other hand, adopting an
  > applicative style in C is difficult (yes, I've tried!).  So Lisp is
  > certainly more versatile.

Really!! I've never yet seen a non-trivial "imperative" LISP program
that was not impossible to read, full of bugs which nobody knows how to
correct, and horribly time-consuming (most of UNIX's VAXIMA is a good
example of what I mean). Writing in an "imperative" style in LISP is a
programming equivalent of "badgorithms".

You can be as abstract as you want to be in C or Pascal.  I don't think
there is anything for which you can't come up with a good program in C,
if your writing style is good (and if it isn't, no language will help).
Of course, there are some activities, especially in some areas of AI,
which are made much easier by the functional style of LISP, and its
representation of programs as data - but even this wouldn't be true for
*all* AI systems. But in terms of versatility, I don't think there can
be much question about the big advantage of C, Pascal, and languages of
this type.

------------------------------

Date: 29 Oct 84 14:41 PST
From: JonL.pa@XEROX.ARPA
Subject: Pessimal Algorithms ("Badgorithms"?)

The following "badgorithm" comes from a practical joke, perpetrated many
years ago; although it is not a naturally occurring "badgorithm", it
does have a humorous side.

In high school, I worked part-time for a university computing center,
programming on an IBM 650 (don't ask how many years ago!).  [One really
shouldn't pooh-pooh the 650 -- it was the world's first list-processing
computer!  Explication follows.]  Its main memory consisted of 2000
10-digit decimal words, stored on a rotating magnetic drum; there were
40 rotational channels on the drum, with each channel holding 50 words
(one complete revolution).  Since the various instructions took a
variable amount of time, it would be most unwise to sequence the
instructions by merely incrementing each instruction address by 1, for
an instruction which took more time than that which would elapse between
two successive words in a "channel" would thus be blocked for one full
drum rotation.  An instruction consisted of an operator, an operand
address, and a "next instruction address" (i.e., a CDR link in Lisp
parlance); thus one could assign the sequencing of instructions to be
"optimal", in that the successor of an instruction at word offset A (mod
50) would be at A+n (mod 50), where n is the time, in fiftieths of a
drum rotation, required to execute the operator stored at A.

The IBM 704/709 series had a machine language assembler called SAP, for
"Symbolic Assembly Program"; the 650 had SOAP, for "Symbolic Optimal
Assembly Program".  One would speak of "Soaping" a program, meaning to
assemble a symbolic deck of cards into a self-loading machine-language
deck.  My cohorts and I dubbed the obvious pessimizing version of SOAP
as SUDS, for "Symbolic Un-optimal Disassembly System" (it would assign
the "next instruction" to a word offset just 1 short of optimal, and
thus would slow down the resultant object code by up to a factor of 50).
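The SOAP/SUDS scheme can be sketched in a toy model.  The 50-word
channel and the "one slot short of optimal" pessimal placement follow
the description above; the instruction stream and the latency
bookkeeping are my own illustration, not the real 650's timing:

```python
# Sketch of SOAP-style "optimal" vs. SUDS-style "pessimal" drum
# scheduling.  Times are measured in word-slots, i.e. fiftieths of a
# drum revolution.

WORDS_PER_CHANNEL = 50

def next_offset(offset, exec_time, pessimal=False):
    """Word offset at which to place the *next* instruction.

    Optimal (SOAP): the drum has rotated exec_time slots past the
    current instruction by the time it finishes, so place the
    successor exactly there.  Pessimal (SUDS): place it one slot
    *before* that point, so the drum must make almost a full extra
    revolution to reach it.
    """
    step = exec_time - 1 if pessimal else exec_time
    return (offset + step) % WORDS_PER_CHANNEL

def run_time(exec_times, pessimal=False):
    """Total time, in word-slots, to run a straight-line program."""
    offset, total = 0, 0
    for t in exec_times:
        total += t                       # time spent executing
        target = next_offset(offset, t, pessimal)
        # Rotational latency: slots the drum turns after execution
        # before the successor arrives under the read head
        # (0 when the placement is optimal, 49 when pessimal).
        total += (target - (offset + t)) % WORDS_PER_CHANNEL
        offset = target
    return total

program = [3, 5, 2, 7, 4]               # hypothetical execution times
print(run_time(program))                # optimal: just the sum
print(run_time(program, pessimal=True)) # nearly a revolution per step
```

With short instruction times, the pessimal schedule approaches the
factor-of-50 slowdown mentioned above.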

As a gag, we SUDS'd the SOAP deck, and left it for others to use.
Imagine the consternation when a program that normally took 10 minutes
to assemble suddenly began taking over an hour!  Of course, we were
quickly found out, and SUDS was relegated to the circular hacks file.

-- Jon L White --

------------------------------

Date: 22 Oct 84 16:18:41 EDT
From: Michael.Jones@CMU-CS-SPICE
Subject: Real Programmers

           [Excerpted from the CMU bboard by Laws@SRI-AI.]

[I regret having to truncate this, but the original was too long to
distribute on AIList.  I have decided to proceed with the following
because it fits in with other recent AIList messages.  -- KIL]


     A recent article devoted to the *macho* side of programming
     made the bald and unvarnished statement:

                Real Programmers write in Fortran.

     [...]
     I feel duty-bound to describe,
     as best I can through the generation gap,
     how a Real Programmer wrote code.
     I'll call him Mel,
     because that was his name.

     I first met Mel when I went to work for Royal McBee Computer Corp.,
     a now-defunct subsidiary of the typewriter company.  [...]
     Mel's job was to re-write
     the blackjack program for the RPC-4000.  [...]
     The new computer had a one-plus-one
     addressing scheme,
     in which each machine instruction,
     in addition to the operation code
     and the address of the needed operand,
     had a second address that indicated where, on the revolving drum,
     the next instruction was located.
     In modern parlance,
     every single instruction was followed by a GO TO!  [...]

     Since Mel knew the numerical value
     of every operation code,
     and assigned his own drum addresses,
     every instruction he wrote could also be considered
     a numerical constant.
     He could pick up an earlier "add" instruction, say,
     and multiply by it,
     if it had the right numeric value.
     His code was not easy for someone else to modify.

     I compared Mel's hand-optimized programs
     with the same code massaged by the optimizing assembler program,
     and Mel's always ran faster.
     That was because the "top-down" method of program design
     hadn't been invented yet,
     and Mel wouldn't have used it anyway.
     He wrote the innermost parts of his program loops first,
     so they would get first choice
     of the optimum address locations on the drum.
     The optimizing assembler wasn't smart enough to do it that way.

     Mel never wrote time-delay loops, either,
     even when the balky Flexowriter
     required a delay between output characters to work right.
     He just located instructions on the drum
     so each successive one was just *past* the read head
     when it was needed;
     the drum had to execute another complete revolution
     to find the next instruction.  [...]
     Mel called the maximum time-delay locations
     the "most pessimum".  [...]

     Perhaps my greatest shock came
     when I found an innocent loop that had no test in it.
     No test. *None*.
     Common sense said it had to be a closed loop,
     where the program would circle, forever, endlessly.
     Program control passed right through it, however,
     and safely out the other side.
     It took me two weeks to figure it out.

     The RPC-4000 computer had a really modern facility
     called an index register.
     It allowed the programmer to write a program loop
     that used an indexed instruction inside;
     each time through,
     the number in the index register
     was added to the address of that instruction,
     so it would refer
     to the next datum in a series.
     He had only to increment the index register
     each time through.
     Mel never used it.

     Instead, he would pull the instruction into a machine register,
     add one to its address,
     and store it back.
     He would then execute the modified instruction
     right from the register.
     The loop was written so this additional execution time
     was taken into account --
     just as this instruction finished,
     the next one was right under the drum's read head,
     ready to go.
     But the loop had no test in it.

     The vital clue came when I noticed
     the index register bit,
     the bit that lay between the address
     and the operation code in the instruction word,
     was turned on--
     yet Mel never used the index register,
     leaving it zero all the time.
     When the light went on it nearly blinded me.

     He had located the data he was working on
     near the top of memory --
     the largest locations the instructions could address --
     so, after the last datum was handled,
     incrementing the instruction address
     would make it overflow.
     The carry would add one to the
     operation code, changing it to the next one in the instruction set:
     a jump instruction.
     Sure enough, the next program instruction was
     in address location zero,
     and the program went happily on its way.
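Mel's overflow trick can be mimicked in a toy simulation.  The word
layout, field widths, and opcode numbers here are invented for
illustration (the real RPC-4000 word format differed); only the carry
mechanism mirrors the story:

```python
# Toy model of Mel's self-modifying loop: an instruction word whose
# low bits hold the operand address.  Incrementing the address past
# its maximum carries into the opcode field, turning the instruction
# into the next opcode in the set -- here, a jump to location zero.

ADDR_BITS = 12
ADDR_MASK = (1 << ADDR_BITS) - 1
OP_LOAD, OP_JUMP = 5, 6          # hypothetical adjacent opcodes

def make_word(opcode, address):
    return (opcode << ADDR_BITS) | address

def fields(word):
    return word >> ADDR_BITS, word & ADDR_MASK

def step(word):
    """One trip around Mel's loop: bump the operand address by one.
    There is no explicit test -- the carry out of the address field
    is the loop exit."""
    return word + 1

# Start with a "load" aimed at the top of memory, the largest address
# the instruction can hold.
word = make_word(OP_LOAD, ADDR_MASK)
assert fields(word) == (OP_LOAD, ADDR_MASK)

word = step(word)                      # the address overflows ...
assert fields(word) == (OP_JUMP, 0)    # ... and the carry makes a
                                       # jump to location zero
```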

     I haven't kept in touch with Mel,
     so I don't know if he ever gave in to the flood of
     change that has washed over programming techniques
     since those long-gone days.
     I like to think he didn't.
     In any event,
     I was impressed enough that I quit looking for the
     offending test,
     telling the Big Boss I couldn't find it.  [...]
     I didn't feel comfortable
     hacking up the code of a Real Programmer."


         -- Source: usenet: utastro!nather, May 21, 1983.


[The Cray is so fast it can execute an infinite loop in three minutes?
This machine might beat it!  -- KIL]

------------------------------

Date: 28 Oct 1984  14:43 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Robot Navigation

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday, October 31;      4:00pm;     8th Floor Playroom

Navigation for Mobile Robots

Rodney A. Brooks

There are a large number of interesting questions in how to build a
mobile robot capable of navigating through unknown surroundings in
order to complete some desired task. Issues include obstacle avoidance
using local observations, overall path planning, registration with a
map and building a map from observations. There is a lot of ongoing
and promising work on the first two of these problems. Less has been
done on the last two.  Registration work has been most successful with
detailed a priori maps in two domains: (1) uncluttered indoor areas
with flat walls giving unambiguous geometric clues, and (2) areas with
reliably identifiable and accurately locatable landmarks visible over
a large area.  Re-registration with maps generated from a robot's own
observations has mainly been successful in two modes: (1) incremental
re-registration involving small motions from a known location, or (2)
in an environment with active beacons providing reliably identifiable
and locatable landmarks.

This talk focuses on some of the issues in building a map from
unreliable observations and in re-registering the robot to that map
much later, again using unreliable observations.  In particular we
consider a new map representation, the requirements on the
representations of the world produced by vision, the role of
landmarks, and whether other sensors such as compasses or inertial
navigation systems are needed.

COMING SOON: Kent Pitman [Nov 7], Ryszard Michalski [Nov 14],
             Phil Agre   [Nov 28]

------------------------------

Date: 29 Oct 1984 14:10-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Accessibility of Analogies

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"Mental Models of Electricity"

Yvette Tenney, BBN Laboratories
Hermann Hartel, University of Kiel, West Germany

BBN Laboratories, 10 Moulton St, Cambridge.
Third floor large conference room, 10:30 AM.
Monday November 5th.


The presentation will consist of two short talks that were part of a
conference on Representations of Students' Knowledge in Electricity and
the Improvement of Teaching, held in Ludwigsburg, Germany this fall.

Talk 1:  Yvette Tenney (in collaboration with Dedre Gentner)
         "What makes analogies accessible:  Experiments on the
         water-flow analogy for electricity."

         In analogy, knowledge can be transferred from a known
         (base) domain to a target domain, provided the learner
         accesses the analogy.  We used the water-electric current
         analogy to test the hypothesis that prior familiarity
         with the base domain (Experiment 1) and pre-training
         on the base domain (Experiment 2) increase the
         likelihood of noticing the analogy.  Results showed
         that greater knowledge of the base domain did not
         increase accessibility, although it did increase the
         power of the analogy if detected.

Talk 2:  Hermann Hartel
         "The electric circuit as a system:  A new approach."
         [...]

------------------------------

Date: Tue 30 Oct 84 09:28:59-PST
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Student Models

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, November 2, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Derek Sleeman
             School of Education & HPP

ABSTRACT:    The PIXIE Project: The Inference and Use of Student
             (user) Models

For a decade or more the importance of having accurate student models
to guide Intelligent Tutoring Systems (ITSs) has been stressed.  I
will give an overview of the several types of models which have been
inferred and will talk in some detail about a system which infers
overlay models, and about PIXIE, which uses process-orientated models.
Currently, all these techniques effectively determine whether the
current user's behaviour falls within a previously defined
model-space.  The focus of some current work is to see whether these
techniques can be extended to be more data-sensitive.  (Analogous
issues arise when an ITS or ES is attempting to reason with an
incomplete database.)

Issues which arise in the use of models to control (remedial)
dialogues will be addressed.

The seminar will conclude with an overview of the fieldwork shortly to
be undertaken.  PIXIE now runs on a PC (in LISP) and several of these
machines will be used to "diagnose" the difficulties which high school
students have with Algebra and maybe Arithmetic.  It is envisaged that
PIXIE will be used to screen several classes, and that the class
teachers will remediate students on the basis of the diagnostic
information provided by PIXIE.  These sessions will then be analyzed
to determine how "real" teachers remediate; remedial subsystem(s) for
PIXIE will then be implemented.




Paula

------------------------------

End of AIList Digest
********************

∂01-Nov-84  1138	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #148    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Nov 84  11:37:25 PST
Date: Thu  1 Nov 1984 09:35-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #148
To: AIList@SRI-AI


AIList Digest            Thursday, 1 Nov 1984     Volume 2 : Issue 148

Today's Topics:
  Linguistics - Bibliography Request,
  AI Tools - Workstations under $50K,
  News - IJCAI Awards,
  Linguistics - Sastric Sanscrit,
  Conference Review - Southern California AI Society,
  Seminar - Coherence in Life Stories
----------------------------------------------------------------------

Date: 30 Oct 1984 20:15:33 EST
From: Miroslav Benda <BENDA@USC-ISI.ARPA>
Subject: linguistics bibliography

Several years ago, Gazdar & Klein published a "Bibliography of Contemporary
Linguistic Research", which was an indexed guide to papers and books on
generative linguistics.  Is there anything similar online somewhere?
Something that is kept up to date (and not several years behind, like
the "Bibliographie Linguistique").

Miroslav Benda
Boeing Computer Services

------------------------------

Date: Wed, 31 Oct 84 13:45:48 pst
From: (Marvin Erickson [pnl]) erickson@lbl-csam
Subject: AI Workstations under $50K

I am interested in comments on the performance of low cost (under $50K) AI
Workstations.  Applications include expert system development and Landsat
image processing.  I am particularly interested in the availability of tools
for either application that run under common Lisp on a PERQ and provide
object oriented capabilities in addition to Lisp.
Ron Melton
Battelle/Pacific Northwest Laboratory
(509) 375-2932
erickson@lbl-csam

------------------------------

Date: Tue, 30 Oct 84 10:43:32 pst
From: Alan Mackworth <mack%ubc.csnet@csnet-relay.arpa>
Subject: IJCAI Awards

               The IJCAI Award for Research Excellence

          The Board of Trustees of  International  Joint  Confer-
     ences  on  Artificial Intelligence Inc. is proud to announce
     the establishment of The IJCAI Award for Research Excellence
     to  honour  sustained  excellence in Artificial Intelligence
     research.  The Award will be made every second year, at  the
     International  Joint  Conference on Artificial Intelligence,
     to a scientist who has carried out a program of research  of
     consistently   high  quality  yielding  several  substantial
     results.  If the research program has been carried out  col-
     laboratively  the  award may be made jointly to the research
     team.

          The Award carries with it a certificate and the sum  of
     $1,000  plus  travel and living expenses for the IJCAI.  The
     researcher(s) will be invited to deliver an address  on  the
     nature and significance of the results achieved.  Primarily,
     however, the award carries the honour of having  one's  work
     selected by one's peers as an exemplar of sustained research
     in the maturing science of Artificial Intelligence.

          We hereby call for nominations for The IJCAI Award  for
     Research  Excellence  to be made at IJCAI-85 in Los Angeles.
     The  accompanying note on  Selection  Procedures  for  IJCAI
     Awards provides the relevant details.


                   The Computers and Thought Award

          The Computers and Thought  Lecture  is  given  at  each
     International Joint Conference on Artificial Intelligence by
     an outstanding young scientist in the  field  of  artificial
     intelligence.  An award of $1,000 and payment for travel and
     subsistence expenses are provided to the recipient.  Nomina-
     tion  and  selection  procedures  are  outlined  in the note
     Selection Procedures for IJCAI Awards.  The Lecture is given
     one evening during the Conference, and the public is invited
     to attend.  The Lectureship was established  with  royalties
     received  from  the  book  Computers  and Thought, edited by
     Feigenbaum and Feldman;  it is currently supported by income
     from IJCAI funds.

          Past recipients of this honour have been Terry Winograd
     (1971), Patrick Winston (1973), Chuck Rieger (1975), Douglas
     Lenat (1977), David Marr (1979), Gerald Sussman  (1981)  and
     Tom Mitchell (1983).

          Nominations are invited for The Computers  and  Thought
     Award  to  be  made  at IJCAI-85 in Los Angeles. The note on
     Selection Procedures for IJCAI Awards covers the  nomination
     procedures to be followed.


                Selection Procedures for IJCAI Awards

          Nominations for The Computers and Thought Award and The
     IJCAI  Award for Research Excellence are invited from all in
     the Artificial Intelligence  international  community.   The
     procedures are the same for both awards.

          There should be a nominator and a  seconder,  at  least
     one  of whom should not have been in the same institution as
     the nominee.  The nominee must agree to be  nominated.   The
     nominators should prepare a short submission of less than 2,000
     words for the voters, outlining the nominee's qualifications
     with respect to the criteria for the particular award.

          The award selection committee is the union of the  Pro-
     gram,  Organizing  and Conference Committees of the upcoming
     IJCAI and the Board of  Trustees  of  IJCAII  with  nominees
     excluded.   Nominations should be submitted before March 31,
     1985 to the IJCAI-85 Conference Chair:

                    Alan Mackworth
                    Department of Computer Science
                    University of British Columbia
                    Vancouver, B.C. V6T 1W5
                    Canada                  Tel. (604) 228-4893

                    Net Addresses   CSnet:   mack@ubc
                                    ARPAnet: mack%ubc@CSNet-Relay
                                    UUCP:    mack@ubc-vision

------------------------------

Date: 29 Oct 1984 10:04-PST (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Re: AIList Digest   V2 #146


        I have been challenged to defend some of my recent assertions.
Bill Poser should be more careful when he criticizes
("Finally, Briggs mistakenly characterizes linguists as
prescriptivists" -- I said the exact opposite on AIList of Thursday Oct.
18, namely that linguistics has become descriptive rather than
prescriptive; my own humble opinion is that non-prescriptive linguistics
will be the death of English).
        With regards to machine translation, the "aesthetic amenities"
could be an advantage rather than a disadvantage, since it might be
possible to encode poetic constructions in the interlingua, otherwise
many subtleties will be lost in the translation.  The Sanskrit scholars
have done a lot of work in formulating a mechanism for expressing
natural language entities unambiguously.  All I am saying is that it
would be unwise to sweep under the carpet millennia (yes, millennia) of
research without attempting to learn something from it.
        Word ambiguity exists in Classical Sanskrit but is not a serious
problem in the Sastra, since the level of representation of meaning
is usually below the word level.  While Caitra is Caitra, "cook" is
a process of softening, etc.  By going one level of representation
deeper, ambiguity between two possible meanings of the same word
is avoided.
        The use of Sastric Sanskrit can be dated back at least as far as
Patanjali's Mahabhashya (1st millennium B.C.).  The tradition continued through
Bhartrhari (the Vakyapadiya), Kaundabhatta, Dikshita (Vaiyakarana-
bhusanasara) and (in the 19th century) Nagesha
(Vaiyakaranasiddhantamanjusa).  That it was spoken is evidenced from the
fact that many Sastric works are actually transcripts of long dialogues
between the different "schools" (e.g. the grammarians and the logicians).
Their arguments were expressed in Sastric Sanskrit.  Arguing about
whether or not it was actually spoken is similar to  asking the same
of the Platonic dialogues.  Admittedly, its use was limited to
the scientific community to a large extent.  The same can be said about
the type of language used in today's scientific community, with its
own specialized jargon and style.  Is Mr. Poser suggesting that
this also is not a natural language?
        I do not understand exactly what Kiparsky means when he asserts
that there is ambiguity in whether or not Caitra or rice is in the pot.
What resides in the pot is a "locality", which has as its object "rice".
Caitra is the agent of that activity; in no way can he be construed to
be in the pot.  Nothing is said about where Caitra is, I suppose he
could be in the pot, but the notion of unspecified slots being filled
in by defaults would be used.  Normally, the agent of cooking is not
in the pot and if he were it would probably be explicitly specified.
        With regards to definiteness ("the pot" or "a pot"):  "pot" is
defined as that which has potness (ghatatvam) in it.   More exactly,
a pot (or any other object) is defined by three terms (the determinant
of meaninghood (shaakyatavacchedaka) is made up of three elements). The first
is the genus (potness), the second is the form (or akrti) ("having a
conch-like appearance about its neck [kambugrivadimatvam]"), and
the third is the individual (pot "ghata").
        I think that much of the confusion results from too close
a correspondence being assumed between Classical and Sastric Sanskrit.
Much of so called "ambiguity" does not exist as the words themselves
are discarded for deeper representation.  Syntactic cases are also
changed when they are expressed as semantic cases since "over a fire"
can mean "by means of a fire" in the case of cooking.
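The semantic-case reading described above can be sketched as a simple
case frame, in the spirit of the analysis, though the slot names and
the default-filling rule below are my own illustration and not the
Sastric formalism itself:

```python
# A rough semantic-case frame for "Caitra cooks rice in a pot over a
# fire".  Unspecified roles are filled by a default marker, mirroring
# the idea that defaults fill slots the sentence leaves open.

SLOTS = ("agent", "object", "locality", "instrument")

def semantic_frame(**cases):
    """Build a case frame; any slot not given gets a default marker."""
    unknown = set(cases) - set(SLOTS)
    if unknown:
        raise ValueError(f"not a semantic case: {unknown}")
    frame = {slot: "unspecified (default)" for slot in SLOTS}
    frame.update(cases)
    return frame

cooking = semantic_frame(
    agent="Caitra",
    object="rice",                 # the rice, not Caitra, is in the pot
    locality="pot",
    instrument="fire",             # the syntactic "over a fire" is read
)                                  # as the semantic "by means of a fire"

assert cooking["agent"] == "Caitra"
assert cooking["object"] == "rice"
```

Because two senses of one word would get different frames at this
deeper level, word-level ambiguity disappears from the representation.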
        Let me state exactly what Sastric Sanskrit is: it is
"the most sophisticated stage of the development of Sanskrit
(through Vedic, Classical, etc.), in which a very sophisticated
philosophy of 'the meaning of a sentence' was developed, and in
which unambiguity was striven for and obtained to a large extent."
By "a large extent" I mean that it is as unambiguous as a description
in semantic nets (say, conceptual dependency); in fact it is more
precise.  What I suggest is that the Linguistic and AI community,
and especially those who are involved in both, take a very close
look at the Sastric methodology and its philosophy, with natural
language processing in mind.  They did
much research in how the mind perceives the meaning of words, and
it is surprising how little exposure it has gotten.

Rick Briggs

------------------------------

Date: Tue, 30 Oct 84 10:18:47 PST
From: Scott Turner <srt@UCLA-LOCUS.ARPA>
Subject: SCAIS Review


  The first meeting of the Southern California AI Society was a major success,
with over 200 people from all walks of life (and industry too :-) attending.
The event was held at the Faculty Center at UCLA, and arrangements were
very comfortable.

  The agenda included almost 8 hours of talks by over 50 speakers.  This
rather long format was intended to allow all the participants to become
familiar with AI activities all around Southern California, but the great
length proved to be a drawback.  By the end of the day the crowd had thinned
considerably.

  Most of the talks were short overviews of ongoing work, but among the
more interesting talks were Rogers Hall of UC Irvine, "Learning in Multiple
Knowledge Sources", Erik Mueller of UCLA "Daydreaming and Story Invention",
and Chuck Williams of Inference Corp., "ART:  Automated Reasoning Tool."

  A short business meeting was held after the talks were finished, where
preparations for IJCAI-85 were discussed and an interim governing board for
SCAIS was selected (i.e., people volunteered).  In all likelihood future
SCAIS meetings will occur monthly or bi-monthly at rotating hosts.  Each
host will showcase their AI activities and invite speakers on a selected
topic.  This format will give SCAIS members a chance to visit all the local
AI Labs over the course of the year, without unduly straining the capacity of
any single Lab.

  After the business meeting there was a demonstration session in the UCLA
AI Lab, hosted by the infamous UCLA Airheads.  Erik Mueller demonstrated his
Daydreamer, Sergio Alvarado demonstrated OpEd, a program that models reading
editorials, Uri Zernik demonstrated GATE, the UCLA Graphical AI Tools
Environment, Charlie Dolan demonstrated Aesop, a program that learns planning
knowledge from reading Aesop's fables, and a number of other students
demonstrated other software and current work.

    Scott R. Turner
    UCLA Computer Science Department
    3531 Boelter Hall, Los Angeles, CA 90024
    ARPA:  srt@UCLA-LOCUS.ARPA
    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt

------------------------------

Date: Wed, 31 Oct 84 17:33:44 pst
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Coherence in Life Stories

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, November 6, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Charlotte Linde, STRUCTURAL SEMANTICS

TITLE:          ``The Creation of Coherence in Life Stories:
                Commonsense  Philosophy and Special Explana-
                tory Systems''

This talk reports on a study of the creation  of  coherence
in oral life stories.  Such coherence is not a property of a
particular life, but rather an achievement of the speaker in
constructing  the story.  Studying the creation of coherence
permits us to examine the  implicit  assumptions  which  are
made  about the nature of socially accepted reasons for life
decisions.  For example, when a  speaker  tells  us  how  he
became  an optometrist, the way he makes this story coherent
can give us insight into folk beliefs about  proper  causes,
the nature of accident, etc.

The first level of the creation of coherence is the level of
implicit,  commonsense philosophical categories, such as, in
English, causality, accident, continuity and  discontinuity.
Speakers  must  show that their lives exhibit proper reasons
for major decisions.  If they can not frame their stories as
exhibiting  such  causality,  they must then analyze them as
involving accident or discontinuity.  Stories about accident
or  discontinuity  tend  to  be  organized  to show that the
accident is purely local, that is, that only one small  part
of  an  otherwise  well-motivated life is accidental.  Simi-
larly, discontinuity is managed by a variety of  strategies,
such  as  discontinuity  as  only apparent, discontinuity as
sequence, and discontinuity as  metacontinuity.   All  these
strategies  work  to  show that the speaker's life is not as
discontinuous as it might look.

A more complex level of coherence is the level  of  explana-
tory  systems.   These  are  non-expert  versions of various
expert systems in the  culture,  such  as  popular  Freudian
theory,  behaviorism,  feminism, and astrology.  The systems
at this level all presuppose the categories of the  previous
level.  That is, they all assume the existence of causality,
but specify possible causes  which  are  somewhat  different
from the causes permitted by the commonsense system.

------------------------------

End of AIList Digest
********************

∂05-Nov-84  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #149    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Nov 84  11:45:38 PST
Date: Mon  5 Nov 1984 10:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #149
To: AIList@SRI-AI


AIList Digest             Monday, 5 Nov 1984      Volume 2 : Issue 149

Today's Topics:
  AI Societies - SCAIS,
  AI Tools - LISP/PROLOG Availability & CAI authoring systems & OPS5 examples,
  AI Literature - Technical Publication Addresses & Linguistics Bibliography,
  Programming - Malgorithms & Programming Style
----------------------------------------------------------------------

Date: Thu, 1 Nov 84 12:34:55 PST
From: Scott Turner <srt@UCLA-LOCUS.ARPA>
Subject: SCAIS

Before I personally get flooded with mail, please send all requests
concerning joining SCAIS, etc., to scais-request@ucla-locus.

    Scott R. Turner
    UCLA Computer Science Department
    3531 Boelter Hall, Los Angeles, CA 90024
    ARPA:  srt@UCLA-LOCUS.ARPA
    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt

------------------------------

Date: 2 Nov 1984 14:06-EST
From: CDR Jeff Ackerson (ACKERSON@USC-ISI)
Subject: LISP/PROLOG Availability

Would like to know if anyone knows of availability of either a
LISP or PROLOG environment that will run on an Altos 586-40.
Currently running Xenix.

------------------------------

Date: 2 Nov 84 13:16-EDT (Fri)
From: Malmros (Fs Hill) <malmros%umass-ece.csnet@csnet-relay.arpa>
Subject: CAI authoring systems

     Perhaps someone out there can help me.  I'm looking for a CAI
authoring system and all the ones I've seen so far have been absolute
dogs.  They're pedagogically simplistic and they're all geared to the
same old tutorial/drill-practice kind of application.  Does anyone know
of something more exciting that would be of sufficient pedagogical
quality for use at the college level?  I'm writing to AILIST because I
don't know where else to go.  My address is:

     malmros.umass-ece@csnet-relay.arpa

     thanks very much.

------------------------------

Date: 4 Nov 1984 10:01:09 EST (Sunday)
From: Charles Howell <m15434@mitre>
Subject: OPS5 examples


I am developing a small KBS using OPS5.  My first goal (if you'll
pardon the expression...) is simply to "get up to speed" on OPS5.
When learning other languages/systems, I've found examples to  be
very  helpful.   Does  anyone  have  any examples of working OPS5
systems that they can send me or give me pointers to?   If  there
is  much  response,  I'll be happy to collect them and distribute
them (or collect and distribute pointers, as the  case  may  be).
If  you  have an example that you wouldn't mind my using (the KBS
is for a graduate course in  AI)  but  you  don't  wish  to  have
distributed,  I'll  of course not include it in the distribution.
I hope my system will be a bit more stable from now on,  so  that
the turnaround on distributing the OPS5 examples isn't as long as
it was for the technical publications addresses...

Thanks very much,
Chuck Howell
The MITRE Corporation

------------------------------

Date: 4 Nov 1984  9:48:08 EST (Sunday)
From: Charles Howell <m15434@mitre>
Subject: Technical Publication addresses


A  month  or  so  ago,  I  posted  a  request  for  addresses  of
institutions publishing technical reports related to AI.  Several
people responded; thanks! Several people also requested a copy of
the  collected  list  of  addresses.   Unfortunately,  the file I
collected these messages in has been destroyed, along with a  lot
of  my other files... and, of course, the most recent backup that
is usable predates the creation of this file.   If you would
like  a  copy  of this list, please let me know.  I apologize for
the delay in responding to those who already sent such a request.

Chuck Howell
The MITRE Corporation

------------------------------

Date: Fri 2 Nov 84 09:00:29-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Linguistics Bibliography

There is an index published by Sociological Abstracts which is called
Language and Language Behavior Abstracts.  I believe it is a quarterly
publication.  It is also an online database available on at least Dialog
and BRS.  I have not made a complete evaluation of these abstracts for
their relevance to computer science, AI etc.  However this week at the
Online conference in San Francisco I did learn that the index does
have a section on mathematical and computational linguistics.  A very
quick search with the descriptor term artificial intelligence came
up with 45 hits.  However much of the material was in the older part
of the database.  I was told there have been some changes in scope, and
as of now I am not sure whether their scope has expanded in the areas
of interest to AIList readers or has been narrowed.

Harry Llull, Computer Science Library Stanford University.

------------------------------

Date: 31 Oct 1984 08:36-CST
From: SAC.LONG@USC-ISIE.ARPA
Subject: Badgorithms


I have noticed much discussion on natural languages and AI processors
with difficulties in linguistic interpretation due to varied
meanings (?!?).  Well my PC (personal consideration) on the use of
'badgorithm' is that it is a poor construct from the similarity and
pronounceability of its original word, 'algorithm'.  In view of this
I would like to submit in its place a modified parsed version:

                           'badorithm'

It seems to me the substituted word is much simpler to pronounce and
has more of an audio similarity to 'algorithm'.  It is not my intent to
bring any discredit or defamation to the coiner of 'badgorithm', but
only to present what may be a more useful form of the original idea.
Of course, such things are a matter of personal taste to a great extent.

What do you think?

------------------------------

Date: Fri 2 Nov 84 14:28:15-CST
From: CMP.BARC@UTEXAS-20.ARPA
Subject: "badgorithms" vs. "algospasms"

I know this is an election year, but perhaps we need more than two choices
on some important issues.  How about "malgorithms" -- or is that too easy?

Dallas Webster

------------------------------

Date: Wed 31 Oct 84 09:55:03-MST
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Programming Style in Lisp and C

(Oh no, not more "programming style" flames!)

I'm a little puzzled why "imperative style" in Lisp is so much worse
than the same style in C or Pascal.  There's a difference between
abstraction and imperative code.  Last year I wrote a large quantity
of graphics C code, and attempted to make it as functional as possible.
I ended up resorting to amazing numbers of preprocessor macros (is this
abstraction?  why not just use the macro processor (it's computationally
adequate) and dump the C compiler?) in order to get the polymorphic
functional style, and it was pretty messy.  Why should one have to tinker
with pointers explicitly, or allocate storage manually?  I have yet
to see a nontrivial C program that dispensed entirely with assignment
statements.

There are at least two reasons for doing "imperative style" in Lisp.
The first reason is to reduce the complexity of programs.  True functional
style requires that you *always* pass around all data that you will
ever use.  For instance, any I/O parameters can never be defaulted -
all of them will always have to be passed to the I/O functions.  I've
heard very few people actually advocating that in practice (although
many are quick to advocate it in theory...).  Another way to put it is that
global variables are completely disallowed!  The second reason is that
Lisp kernels generally have to be coded for von Neumann machines, and so their
code tends to look more "imperative" in nature (of course, I'm assuming
that one is doing Lisp-in-Lisp).  The PSL kernel definitely has an
atypical coding style, but of course you can't implement LAMBDA using
LAMBDA directly!
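
[The trade-off described above -- defaulted global state versus passing
all state explicitly -- can be sketched in a few lines.  This is a
hypothetical Python sketch, not Lisp or C and not from the original post;
all names are invented for illustration.  -- Ed.]

```python
# Imperative style: a shared default that callers need not pass around.
output_stream = []          # stands in for a defaulted I/O parameter

def log_imperative(msg):
    output_stream.append(msg)        # side effect on shared state

# Pure functional style: *all* state is passed in and handed back.
def log_functional(stream, msg):
    return stream + [msg]            # no mutation; a new stream is returned

log_imperative("a")
log_imperative("b")

stream = []
stream = log_functional(stream, "a")   # every caller must thread the stream
stream = log_functional(stream, "b")

assert output_stream == ["a", "b"]
assert stream == ["a", "b"]
```

The functional version is easier to reason about, but every function in the
call chain must accept and return the stream -- exactly the burden the post
says few people advocate in practice.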

In any case, I've only seen a few Lisp programs that were totally
without side-effect operators, and those were small examples.  I'd
be interested to hear of a major system being done in a true functional
style (Steele's RABBIT compiler for Scheme is the closest candidate
I know).  Side effects have their place, albeit a rather small one...

                                                        stan shebs

------------------------------

Date: Wed 31 Oct 84 10:16:48-MST
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: What To Do With All Those Functions

I use lots and lots of small functions in Lisp programs also, and have
adopted a sort of semi-systematic depth-first scheme for ordering them
in a file.  That is, if A calls B and C, and B calls D, and C calls
E and F, then I put them in the order A B D C E F.  The rationale is
that B and D (for instance) form a unit, and should therefore be
grouped together.  If a function is used in several places, I usually
put it close to the first place.  If it's used in *lots* of places,
it's a utility, and therefore goes in a separate utilities file.
Files should be kept relatively small (<1000 lines), and should have
plenty of "separating" documentation that divides larger files into
several parts.
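
[The ordering described above is a preorder walk of the call graph that
skips functions already placed.  A hypothetical Python sketch, not from
the original post; the function and variable names are invented.  -- Ed.]

```python
def file_order(call_graph, top):
    """Order definitions so each function is followed by the helpers
    it is the first to call (depth-first, skipping repeats)."""
    order, seen = [], set()

    def visit(fn):
        if fn in seen:               # a shared helper stays near its
            return                   # *first* caller, as described above
        seen.add(fn)
        order.append(fn)
        for callee in call_graph.get(fn, ()):
            visit(callee)

    visit(top)
    return order

# The example from the text: A calls B and C, B calls D, C calls E and F.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
print(file_order(graph, "A"))   # -> ['A', 'B', 'D', 'C', 'E', 'F']
```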

Don't think of functions as a burden;  they are an advantage.  There's
no limit on name lengths, and their cost is trivial, so you can name
them to be very mnemonic (such as "get-first-item-and-mung").  This
is a great aid for debugging.  Also, programs will be easier to modify
later on (and save scrolling work for the text editor!).

                                                        stan shebs

------------------------------

Date: Wed 31 Oct 84 14:18:32-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Functionitis

I thought that there were several books answering questions of the sort
Shawn Amirsardary posed and that all decent Universities offered courses
that dealt with such questions.  The subject is called "programming
methodology".  Since the questions are age-old and so are the answers to
them, I will be brief.

What do you do with "million and a half functions that get called from
usually one place"?  You use "abstraction".  If you are trying to
understand a program by tracing it, you are NOT using abstraction.  In that
case you would naturally prefer SETQ's to functions.  But, if you know how
to use abstraction, you would hate SETQ's.

How do you order function definitions?  You organize them into "modules"
which are also called "classes", "forms", "clusters" or "packages" in
various contexts.  If you are using a state-of-the-art language, it should
support modules.  Otherwise, you can still organize the functions into
modules on your own.
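
[The point about organizing functions into modules on your own can be
illustrated with a toy example: one public interface, the representation
hidden behind it.  A hypothetical Python sketch standing in for the
"clusters"/"classes"/"packages" mentioned above; the names are invented.
-- Ed.]

```python
# A hand-rolled "module": clients see push/pop/empty, never the list inside.
class Stack:                       # the "cluster" or "class" of the text
    def __init__(self):
        self._items = []           # representation hidden behind the interface

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2                # callers use the abstraction,
assert not s.empty()               # never the representation
```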

------------------------------

Date: Fri 2 Nov 84 14:33:58-CST
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Re: Lisp Function-itis

Would you consider trading your LAMBDA (or 3600 or Dandelion) for a NORMA
(Normal Order Reduction Machine Architecture), and using a purely functional
language that supports modules (SASL)?

Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

End of AIList Digest
********************

∂07-Nov-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #150    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Nov 84  18:09:32 PST
Date: Wed  7 Nov 1984 15:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #150
To: AIList@SRI-AI


AIList Digest            Thursday, 8 Nov 1984     Volume 2 : Issue 150

Today's Topics:
  AI Tools - Prolog Availability,
  AI Education - Getting Started in AI & CAI Authoring,
  Linguistics - Interlinguae,
  Seminars - Knowledge Representation and Problem Solving & Vision &
    Knowledge Editing & Automatic Program Debugging,
  Conferences -  Software Maintenance & Security And Privacy
----------------------------------------------------------------------

Date: Mon, 5 Nov 84 21:22:53 mst
From: "Arthur I. Karshmer" <arthur%nmsu.csnet@csnet-relay.arpa>
Subject: Prolog Availability


Our VAX-11/750 running UNIX 4.2 is newly installed and we would very much
like to locate PROLOG for it. We would appreciate any help in finding
a version of PROLOG for our system. Further, we are using a number of
DEC pro-350 systems under Venix/11. The version of PROLOG we currently
have for these systems is badly brain damaged - is there any help
available in this area?

------------------------------

Date: 6 Nov 84 09:32:25 PST (Tuesday)
From: cherry.es@XEROX.ARPA
Subject: Getting started in AI

I am looking for any pointers which may help me get started in LISP.
Utility programs, applications programs, etc. will be helpful so that I
can analyze the source to better understand what I am trying to
accomplish.  Most of the literature I have read on the topic of AI makes
the assumption that the reader is quite proficient in the LISP
environment.  While I'm not new to programming, the LISP environment is
new to me.

My purpose for utilizing AI will be as an engineering aid for product
yield improvement.

Cherry.es@Xerox.Arpa

------------------------------

Date: 5 November 1984 1311-PST (Monday)
From: psotka@nprdc
Reply-to: psotka@NPRDC
Subject: CAI Authoring

I too would like to hear about good CAI authoring systems.  Several
commercial systems that run on VAXen, CYBERs, and other stuff are really
good for their purpose -- linear CAI.  The real question, it seems to me,
is how to use the marvelous computational power of personal
Lisp machines to do CAI authoring.  What kinds of facilities would one want?
Natural language interpreters; graphic simulation systems for rapid
prototyping; expert systems for explaining;  complex knowledge representation.
ETC.   Could such a system be designed now to produce instruction as
effective as one on one tutoring by an expert?  Would the author (the person
using the system to develop instruction) have to be an expert in the area
being taught (and an expert teacher, too)??


[For one viewpoint on nonlinear CAI, see Jacques Hebenstreit's article,
Computers in Education: The French Experience (1970--1984), in the
Fall issue of Abacus.  -- KIL]

------------------------------

Date: Sat, 3 Nov 84 15:49:16 pst
From: Bill Poser <poser@SU-Russell>
Subject: Interlinguae

        I think that it is Rick Briggs who should read his own
writing more carefully. The relevant portion of Briggs' comment
runs as follows:

"Current Linguistics has begun to actually aid this entropy by paying
special attention to slang and casual usage (descriptive vs. prescriptive).
Without some negentropy from the linguists, I fear that English will
degenerate further."

The use of the inchoative "has begun" in the first sentence clearly
presupposes that Linguistics has hitherto been prescriptive.  (I.e.,
Linguists have only have just begun to pay special attention to slang
and casual speech; they have just begun to engage in descriptive, as
opposed to prescriptive, linguistics.)  So although it is quite true
that Briggs recognizes that there is now a descriptive element to
Linguistics, he is claiming (whether he intended to or not) that
Linguistics has been prescriptive and still is predominantly
prescriptive, and that it would be appropriate for linguists to be
more prescriptive. My point, which I believe still stands, was that
what we call Linguistics is not at all prescriptive and has not been
in the past.  Modern Linguistics (by which I mean Linguistics since
the mid-nineteenth century) is by definition not prescriptive.
Moreover, the traditions of prescriptive grammar and Linguistics have
been essentially independent for a very long time.
        Polemic aside, there is a real issue here. Briggs is
claiming that there is such a thing as degeneration of languages.
Now it is certainly true that some people use language more effectively
than others, whether we measure effectiveness in terms of aesthetics or
clarity or what. And it may be that the mean effectiveness of language use
over a population varies with time, e.g. as literacy rises and falls,
although I know of no objective demonstration of such a claim. But
that does not mean that the *language* degenerates--only that its use
degenerates. The issue is whether historical change in language results
in degeneration of the language. This is certainly an empirical issue,
but I am not aware of any evidence that such degeneration takes place.
Features of one generation's casual style often become features of a
subsequent generation's formal style. There is just no evidence that
any historical stage of a language is less useful or more ambiguous
or whatever than any other. Different languages (and different social
and geographic dialects and historical stages of the same language)
differ in what information they present obligatorily or briefly,
but there is no evidence that there are statements that can be made
in one language that cannot be translated into another language, although
the expression of a given piece of information in one language may be
more or less cumbersome than in the other. In sum, while it is very
common for people to believe that their language is deteriorating and
look back to some golden age in which the language was just right,
the notion that there is such a thing as degeneration of a language
(short of the special case of "language death" that sometimes
occurs when a language has only a few speakers left) is one that
has never been substantiated.
        Finally, to return to my challenge to Briggs to show that
Shastric Sanskrit is a natural language, he argues that the
existence of dialogues written in it demonstrates that it was spoken,
suggesting that raising the issue of whether this demonstrates that
it was actually spoken is equivalent to raising the issue of whether
the Platonic dialogues were actually spoken.
It is quite possible to write dialogues that never took place, and
moreover to write them in a style that would never have been used
in actual speech, so the existence of written dialogues in and of
itself is not compelling. In fact, if I am not mistaken, the Platonic
dialogues are not believed to be actual transcripts of spoken
dialogues. In the case of Greek we have lots of other evidence that
the language was spoken, and the language of the dialogues is not so
different from other forms of the language, so I would not argue that the
Platonic dialogues could not have been spoken. But Shastric Sanskrit
differs sufficiently from other forms of Sanskrit that one must consider
seriously the possibility that the dialogues written in it were never actually
spoken. The existence of dialogues in the language certainly shows that
it had a broader semantics than, say, the language of mathematical discourse,
but it doesn't show that Shastric Sanskrit was actually a spoken language.
        But let's go one step further. Suppose that Briggs is right and
some people actually spoke Shastric Sanskrit, perhaps even all the time.
The mere fact that it could be spoken wouldn't mean that it wasn't artificial.
People speak Esperanto too. I reiterate: a language is artificial if it
was consciously designed by human beings. The use to which an artificial
language is put says nothing about its artificiality. (I'll back down
just a bit here. We should probably be willing to give a language status
as a natural language (in one sense) if, although it is the result
of conscious design, it is subsequently learned as a native language
by human children. This learnability would presumably show that the
language's properties are those of a natural language, although
it happens that it did not evolve naturally.)
        I still think that Shastric Sanskrit is an artificial derivative
of Sanskrit used for specialized scientific purposes, not a natural language.
Briggs asks whether I would deny the language of scientific discourse
the status of natural language. As I indicated in my very first message
on this topic, yes I would, at least the language of mathematics. The
language of mathematics is a specialized derivative of normal language
that contains special constructions that in some cases violate strong
syntactic constraints of the natural base. Consider the "such that"
construction in English mathematical language, for example.
        I suspect that it is pointless to quibble endlessly about
whether or not a given form of specialized language is natural --
we'll just end up worrying about at what point we say
that the specialized language departs sufficiently from its
source to differentiate them. But the real point, and the one that
I have been trying to make from the outset, is simple and, I
think, untouched. It is possible to create specialized languages based
on natural languages that are more precise, less ambiguous, etc., conceivably
even perfect in these respects, and therefore better candidates for
machine translation interlinguae, but there is no known natural language
which in its ordinary form has these properties.

------------------------------

Date: Fri 2 Nov 84 11:57:10-PST
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Seminars - Knowledge Representation, Problem Solving, Vision

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A couple of researchers from IBM Yorktown will be at HPP next Thursday
(11/8/84).  They will present two short 20 minute talks starting at
10 am on distributed computing (AI and systems) research at their
research facility.  Anyone who is interested in listening to their
talks and/or talking to them should show up at that time.  Details are
given below:

Time: 10 am
Day: Thursday (11/8/84)
Place: Welch Road conference room (HPP, 701 Welch Rd., Bldg C)
Speakers: Sanjaya Addanki and Danny Sabbah
Abstracts:

*Abstract1*

Knowledge Representation and Parallel Problem Solving:

          While there has  been much research on  "naive sciences" and
          "expert  systems" for  problem-solving  in  complex domains,
          there is a large class of  problem solving tasks that is not
          covered by these efforts.  These tasks (e.g. intelligent de-
          sign in complex domains) require  systems to go beyond their
          high level rules into deeper levels of knowledge down to the
          "first principles"  of the  field. For example,  new designs
          often  hinge on  modifying  existing  assumptions about  the
          world. These  modifications cause changes in  the high level
          rules about the world.   Clearly, the processes of identify-
          ing the modifications to be made and deducing the changes to
          the rules require deeper levels of knowledge.

          We propose  a hierarchical,  prototype-based scheme  for the
          representation and interpretation of the different levels of
          knowledge  required by  an  intelligent  design system  that
          functions in a world of complex devices. We choose design as
          the target  task because it  requires both the  analysis and
          synthesis of solutions and thus covers much of problem solv-
          ing.  This work is a part of a larger effort in developing a
          parallel approach to complex problem solving.

*Abstract2*

Vision:

In this short overview of current interest in Computer Vision at Yorktown,
we will be discussing issues in:

    a) Incorporation of complex shape representation (e.g. Extended Gaussian
Images) into parallel visual recognition systems.
    b) Improvement of recognition behavior through the incorporation of
multiple sources of information (e.g. contour, motion, texture)
    c) A possible mechanism for focus of attention in highly parallel,
connectionist vision systems  (an approach to indexing into a large data
base of objects in such vision systems).

Detailed solutions will be sparse as the work is beginning and is just through
the proposal stage.  The issues, however, are relevant to any visual
recognition system.

------------------------------

Date: 5 Nov 1984  13:04 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Knowledge Editing

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


        Wednesday,  Nov 7       4:00pm      8th floor playroom

               CREF: A Cross-Referenced Editing Facility
                 for the Knowledge Engineer's Assistant


                             Kent M. Pitman


I will present a critical analysis of a tool I call CREF (Cross
Referenced Editing Facility), which I developed this summer at the Human
Cognition Research Laboratory of the Open University in Milton Keynes,
England. CREF was originally designed to fill a very specific purpose in
the KEA (Knowledge Engineer's Assistant) project, but appears to be of
much more general utility than I had originally intended and I am
currently investigating its status as a ``next generation'' general
purpose text editor.

CREF might be described as a cross between Zmacs, Zmail, and the Emacs
INFO subsystem. Its capabilities for cross referencing, summarization,
and linearized presentation of non-linear text put it in the same family
as systems such as NLS, Hypertext, and Textnet.

------------------------------

Date: Mon, 5 Nov 84 10:20:54 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Automatic Program Debugging

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]


     Heuristic and Formal Methods in Automatic Program Debugging
                                  by
                          William R. Murray

                         noon  Friday Nov. 9
                               PAI 3.38

  I will discuss the implementation of an automatic debugging system for
pure LISP functions written to solve small but nontrivial tasks.  It  is
intended to be the  expert module of  an intelligent tutoring  system to
teach LISP.  The debugger uses both heuristic and formal methods to find
and correct bugs  in student  programs.   Proofs  of correctness  of the
debugged definitions are generated for  verification by the Boyer  Moore
Theorem Prover.

   Heuristic methods are used  in algorithm identification, the  mapping
of stored functions to student functions, the generation of verification
conditions, and in the localization  of bugs.   Formal methods  are used
in a  case  analysis  which  detects  bugs,  in  symbolic  evaluation of
functions, and in the verification of results.  One of the main roles of
the theorem prover is to represent intensionally an infinite database of
all possible rewrite rules.

 - Regards,
      Bill

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPERS - CONFERENCE ON SOFTWARE MAINTENANCE -- 1985

Conference On Software Maintenance -- 1985

   Washington, D.C., Nov. 11-13

The conference will be sponsored by the Association For Women in Computing, the
Data Processing Management Association, the Institute for Electrical &
Electronics Engineers, Inc., the National Bureau of Standards and the Special
Interest Groups on Software Maintenance in cooperation with the Special Interest
Group on Software Engineering.

Papers are being solicited in the following areas:

   controlling software maintenance
   software maintenance careers and education
   case studies -- successes and failures
   configuration management
   maintenance of distributed, embedded, hybrid and real-time systems
   debugging code
   developing maintenance documentation and environments
   end-user maintenance
   software maintenance error distribution
   software evolution
   software maintenance metrics
   software retirement/conversion
   technology transfer
   understanding the software maintainer

Submission deadline is Feb. 4, and 5 double-spaced copies are required.  Papers
should range from 1,000 to 5,000 words in length.

The first page must include the title and a maximum 250-word abstract; all the
authors' names, affiliations, mailing addresses and telephone numbers; and a
statement of commitment that one of the authors will present the paper at the
conference if it is accepted.

Submit papers and panel session proposals to: Roger Martin (CMS-85), National
Bureau of Standards, Building 225, Room B266, Gaithersburg, Md. 20899

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPER -- 1985 Symposium On Security And Privacy

1985 Symposium On Security And Privacy

   Oakland, Ca., April 21-24

The meet is being sponsored by the Technical Committee on Security and Privacy
and the Institute of Electrical & Electronics Engineers, Inc.

Papers and panel session proposals are being solicited in the following areas:

   security testing and evaluation
   applications security
   network security
   formal security models
   formal verification
   authentication
   data encryption
   data base security
   operating system security
   privacy issues
   cryptography protocols

Send three copies of the paper, an extended abstract of 2,000 words, or a
panel proposal by Dec. 14 to:

   J.K. Millen
   Mitre Corp.
   P.O. Box 208
   Bedford, Mass. 01730

Final papers will be due by Feb. 25 in order to be included in the proceedings.

------------------------------

End of AIList Digest
********************

∂09-Nov-84  1308	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #151    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Nov 84  13:04:56 PST
Date: Fri  9 Nov 1984 10:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #151
To: AIList@SRI-AI


AIList Digest             Friday, 9 Nov 1984      Volume 2 : Issue 151

Today's Topics:
  AI Hardware - Fujitsu Facom Alpha
  AI Literature - Journal of Intelligent Systems
    & Artificial Intelligence Markets & Machine Intelligence News Digest,
  Algorithms - Taxonomy and Uses of Malgorithms,
  Program Description - Social Impacts of Computing, UC-Irvine
----------------------------------------------------------------------

Date: Fri, 9 Nov 1984  13:20 EST
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: AI Hardware - Fujitsu Facom Alpha


In a recent issue of Electronic News (I think it was October), there
was an article on AI systems which was interesting.  After the usual
discussion about Symbolics, LMI, and Xerox lisp machines, the article
discussed a Fujitsu machine called the "facom alpha" which was priced
at 90K and which Gary Moskovitz of Xerox described as a "back-end
processor to a main frame."  Now it doesn't seem that 90K for a
back-end processor is much of a bargain, but I think the idea of a
very fast Lisp processing back end for a mainframe is worth looking
at.  To be able to use a 3600 or a Lambda as a development environment
but know that one could ultimately use a mainframe as the execution
environment would, I think, make big business look more kindly upon
potential AI projects.

Has anyone out there seen the Fujitsu machine or know anything about
it?  I'd like to hear whatever information, thoughts, rumors, etc.
people have on it.  If there is a Fujitsu person out there, I'd be
interested in hearing from you.

I'd also like to know what kind of thoughts people had on this topic:
Lisp back ends for mainframes that can roughly compare with the
various Lisp machines, as opposed to the single-user workstations
in use now.  Is anyone working on such a thing here in the U.S.?

Thanks,

     Chunka Mui
     Chunka%mit-oz@mit-mc

------------------------------

Date: Mon 5 Nov 84 14:53:19-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Journal of Intelligent Systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I have received very brief information on a journal to appear in 1985.
The Journal of Intelligent Systems will be published by Freund Publishing
House Ltd., London, England, for $120 per year, as a quarterly.  The editors
are Frank George, Les Johnson, and Mike Wright of Brunel University,
Uxbridge, England.  The managing editor is Mrs. Alison Lovejoy, AA
Publishing Services, London, England.  The aims and scope are described
as follows:

...to provide research and review papers on an interdisciplinary level, where
the focal point is the field of intelligent systems.  This field includes:
the empirical study and modelling of natural intelligent systems (human
beings and also relevant studies in evolutionary theory and biology);
the theoretical analysis of possible systems which could display intelligence,
the development and enhancement of intelligent systems (eg learning theories)
the designing of intelligent systems (or the application of intelligent systems
concepts to the design of semi-intelligent machines) and the philosophical
aspects of the field of intelligent systems.

It is believed that technological advances in such areas as robotics and
knowledge based systems are facilitated by interdisciplinary communication.
Additionally, those sciences which are concerned with the understanding of
human intelligence stand to gain by such a dialogue.

In keeping with the interdisciplinary intent of the journal, papers will be
written for general professional readership.  It is therefore important
that technical jargon should be avoided or, if used, should be made
explicit...


An editorial board of 20 is being formed at present.  If anyone has any
information or opinions about this publication, please let me know.
Does it sound like something I should order for the Math/CS Library?

Harry Llull

------------------------------

Date: Wed 7 Nov 84 11:24:31-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artificial Intelligence Markets

I just got a flier from AIM Publications, P.O. Box 156, Natick, MA 01760.
They are planning a newsletter, Artificial Intelligence Markets, to
track the AI business world starting in January 1985.  The price is
$255 regular, $195 charter, $380 2-year, and $550 3-year for 12 issues
per year of eight pages each.

The flier claims this to be the ONLY publication dedicated to covering
commercial AI (also DoD and Fifth Generation coverage).  Perhaps they
aren't aware of the AI Report from AI Publications (95 First St., Los
Altos, CA  94022), or of the Georgia Tech (?) newsletter described in
AIList about six months ago.  I've also heard recently of an "AI and
its Applications" newsletter, but have no details.

The flier does mention levels of AI investment by U.S. companies, and
claims that the current AI market of $125 million (36% software, 12%
intelligent robots, 52% LISP workstations) will expand to $4,440 million
by 1990: 43% software (7% LISP, 13% expert system tools, 5% natural
language, 8% programming languages, 8% military, 2% other), 15%
intelligent robots, 28% LISP workstations, 11% other processors, and
3% AI communications.

                                        -- Ken Laws

------------------------------

Date: Wed 7 Nov 84 15:32:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Machine Intelligence News Digest

From the November issue of IEEE Spectrum, p. 123:

Yet another newsletter covering the field of artificial intelligence
has been announced, but this time it comes from the United Kingdom.
Machine Intelligence News Digest is the first British news publication
to monitor artificial intelligence on a monthly basis.  It will
concentrate on the existing and potential applications of AI and
their impact on the industrial and commercial world.  It will also
include a calendar of events and a publication review section.
Regular coverage will be given to artificial vision and speech
recognition, AI languages such as LISP, integrating intelligent
machines with computer-aided systems, and AI research programs.
These include the DARPA effort in the United States, the fifth-
generation project in Japan, and the Esprit program in France.

The monthly newsletter costs 110 pounds annually ($140).  Subscription
information is available by writing the publisher, Oyez Scientific
and Technical Services Ltd., Bath House, 3rd Fl., 56 Holborn Viaduct,
London EC1A 2EX, England; or calling 01-236-4080.

------------------------------

Date: 7 Nov 84 14:06:29 EST
From: BIESEL@RUTGERS.ARPA
Subject: Taxonomy of malgorithms.

Now that the concept of malgorithms has been defined it behooves us as
serious scientists to classify the different kinds of malgorithms, to
write learned papers in obscure journals, and to generally do everything
to bring scholarly respectability to this heretofore underrecognized area
of computer science. The following is a modest contribution to the
establishment of a taxonomy of malgorithms.

The notion of an optimal algorithm is an old one, and the definition
of optimality in time, say, or in storage is straightforward. The little
"o" and the big "O" notation is well established and suffices to define
the complexity of an algorithm (except for a constant or two), and thus
permits the comparison of two algorithms for the same problem. The optimal
algorithm is therefore simply that algorithm which has the lowest
time complexity for any given problem. Often it is possible to prove
mathematically that the best possible algorithm for a given class of problems
cannot do better than some lower bound.
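
For illustration, here is a small sketch (in modern Python; the names
linear_count and binary_count are ours) that compares two algorithms for
the same problem, membership search in a sorted list, by counting
comparisons, the kind of measurement the O-notation summarizes:

```python
def linear_count(xs, target):
    """O(n) search: scan every element, counting comparisons."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_count(xs, target):
    """O(log n) search: halve the interval, counting comparisons."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps
```

On a sorted list of 1024 elements the linear scan may need 1024
comparisons where the binary search needs at most 11, and binary search
is asymptotically optimal here, since comparison-based searching has a
logarithmic lower bound.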

The converse of this, the worst possible algorithm, is not as easily defined.
Is the worst possible algorithm one that never finishes, while wiping
out every piece of storage and tying up your computer until you unplug
it? Or, more insidious, does this algorithm appear to run normally,
generate recognizable output, but produce results that are subtly wrong,
so wrong as to cause maximum damage when the results are used?

If we restrict our considerations only to those algorithms that
actually produce the correct result, but do so in the longest possible
time, we run into other problems. The concept of 'longest possible time'
is ill-defined, since we do not know the temporal extent of the
universe. Neglecting for the moment the relatively trivial problem
of how to keep a computer running forever (a hardware problem, and
therefore not worthy of our consideration), we still need to
define some upper bounds on the time intervals we are considering.

Assumption 1: The universe will exist forever.
Definition 1: Any algorithm that runs forever before it produces the
correct result is a member of the class "Aleph Zero". Extensions
to algorithms that take longer than this are made in the obvious way
(i.e. classes aleph one etc.). The development of such an algorithm
is left as an exercise to the reader.

Assumption 2: The universe will exist until some terminal
climactic event.
Definition 2: Any algorithm that runs a finite amount of time,
and produces its output at the last moment of existence, is a
member of the class "Gabriel".  (Members of non-Christian
religions may wish to substitute a climactic event of their own
choice.)

While the classes thus far defined would appear to specify
theoretical upper bounds for malgorithm execution times, some
practitioners may be concerned with malgorithms that take into
account the limitations of present hardware configurations. While
this kind of pandering to mechanical strictures is abhorrent
to every theoretician, some precedents exist in the literature,
and we will accordingly briefly touch upon the subject here.

Suppose we have devised a malgorithm which can run an arbitrary
amount of time before producing its result. The task now becomes
one of maximizing this time, subject to the constraints formed
by the finite MTBF of the hardware, and the equally finite tolerance
threshold of the person waiting for the result.

Definition 3: Any malgorithm which produces its output at the last
possible instant before either the hardware fails or the user
terminates the program is a member of class "Epsilon".

As an aside, malgorithms of this class will usually require some
additions to the operating system to recognize an attempt to
cancel the program execution. Hardware modifications, in the
form of energy storage systems to permit the program to
print its output after the frustrated user has pulled the power
plug, will probably also be necessary.

It should be noted that malgorithms of class "Epsilon" have
an unfortunate flaw: since they produce output whenever they are
terminated by the user, they are also the fastest possible
algorithms for any problem, being limited only by the speed with
which the user can pull the plug. Once malgorithms of this class
have become established, future work in computational speedup
will likely focus on fast switches for power cutoff.

Now that we have defined some upper bounds on theoretical
malgorithm performance, we would like to define some additional
classes of actual malgorithms, primarily for taxonomic purposes.

The classes below are only a beginning, and the reader is invited
to contribute additional definitions and examples to the discussion.
The classes are not maximal or minimal in any sense, but merely define
some categories of malgorithms. Example malgorithms should be easily
recognized as falling into one or another of the classes defined.

Definition 4: Malgorithms which employ recursion to solve a problem
for which there exists a closed form solution are members of class
"Fibonacci".

Definition 5: Malgorithms which solve a problem by exhaustive generation
of all permutations, when there is any alternative solution, are
members of class "Salesman".

Definition 6: Malgorithms which apply a general algorithm to the wrong
size problem are members of class "Heapsort".
Example: Heapsort applied to the list 1,3,2.
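
The example, sketched in Python using the standard heapq module: the
full heapsort machinery brought to bear on a three-element list.

```python
import heapq

def heapsort(xs):
    """Heapsort via a binary heap -- sensible for large n, overkill for 3."""
    heap = list(xs)
    heapq.heapify(heap)          # build the heap
    return [heapq.heappop(heap) for _ in range(len(heap))]

heapsort([1, 3, 2])  # builds and tears down a whole heap to sort 3 items
```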

Definition 7: Malgorithms for Monte-Carlo solutions to analytic
functions are members of class "Pi".

Definition 8: Malgorithms which provide a solution to a problem by
solving a more complex isomorphic problem are members of the class
"Gauss".
Example: Multiplication of two numbers by adding their logarithms.
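
The example, sketched in Python: multiplying two positive numbers by way
of the isomorphic, and more delicate, problem of adding their logarithms.

```python
import math

def multiply_by_logs(a, b):
    """Class "Gauss": multiply positive a and b via exp(log a + log b)."""
    return math.exp(math.log(a) + math.log(b))
```

Note that multiply_by_logs(6, 7) yields 42 only to within floating-point
error; a * b would have been exact.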

Definition 9: Malgorithms which perform redundant computations
are members of class "Sheep".
Example: Determining the number of sheep in a herd by counting
the number of legs and dividing by four.
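
The example, sketched in Python (the representation of a herd as a list
of names is ours):

```python
def count_sheep(herd):
    """Class "Sheep": count every leg in the herd, then divide by four."""
    legs = 0
    for _sheep in herd:
        for _leg in range(4):  # redundantly tally all four legs
            legs += 1
    return legs // 4
```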

It should be noted that the classes proposed here are neither
exhaustive, nor are they mutually exclusive. Most current
programs contain algorithms which upon inspection are really
malgorithms that fall into one or more of the classes here
defined. It is our devout hope that this short note will lead to
a more intensive investigation of this much neglected area of
computer science. The author is convinced that this area
can provide subject matter for several Ph.D. dissertations
at the more mathematically rigorous institutions of higher
learning, and wishes to express his gratitude to the contributors
to the Ailist, who have given the impetus for this important work.


Biesel@Rutgers.ARPA

------------------------------

Date: Thu, 8 Nov 84 11:30:22 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: data structures + malgorithms =

It is clear that there are many more malgorithms than problems, if for no
other reason than we all have more solutions than problems.  The real
science of malgorithms is to find really useful applications of good
malgorithms beyond the trivial ones of classroom and textbook examples.

To my surprise there really are such uses, or there is at least one.
The other day I was talking with an attorney about copyrighting programs.
He says that in copyright cases in which there is some question of
authorship, judges are often impressed by "fingerprints" embedded in
software.  The usual kind of fingerprint is a copyright notice
buried in block 0 of an index file, variable names which form a code
for the author's name, etc.  But he says the most effective fingerprints
are sections of code so poorly designed and written that their inclusion
in the software must have been intentional, since nobody would be stupid
enough to use such sloppy techniques in their normal practice.
In court, to prove you wrote the program, you simply point out the bad parts
to the judge and claim that, since you are an expert, the only way that code
could have gotten there was by your intentionally inserting a fingerprint.

A nice side effect of this technique is that we now have a good excuse to
give to grad students and others who discover malgorithms in our programs.  We
simply say that we are preparing to protect our copyright.

So here we have the birth of a new discipline.  Not only do we have the
design and analysis of malgorithms; we now have applications of malgorithms
as well.  The question is, are there any other applications?

------------------------------

Date: 3 Nov 1984 1201-PST
From: Rob-Kling <Kling%UCI-20B@UCI-750a>
Subject: Program Description - Social Impacts of Computing, UC-Irvine

                                CORPS

                        Graduate Education in
            Computing, Organizations, Policy, and Society
               at the University of California, Irvine

     This graduate concentration at the University of California,
Irvine provides an opportunity for scholars and students to
investigate the social dimensions of computerization in a setting
which supports reflective and sustained inquiry.

     The primary educational opportunities are PhD concentrations in
the Department of Information and Computer Science (ICS) and MS and
PhD concentrations in the Graduate School of Management (GSM).
Students in each concentration can specialize in studying the social
dimensions of computing.

     The faculty at Irvine have been active in this area, with many
interdisciplinary projects, since the early 1970's.  The faculty and
students in the CORPS concentration have approached these issues with
methods drawn from the social sciences.

     The CORPS concentration focuses upon four related areas of
inquiry:

 1.  Examining the social consequences of different kinds of
     computerization on social life in organizations and in the larger
     society.

 2.  Examining the social dimensions of the work and organizational
     worlds in which computer technologies are developed, marketed,
     disseminated, deployed, and sustained.

 3.  Evaluating the effectiveness of strategies for managing the
     deployment and use of computer-based technologies.

 4.  Evaluating and proposing public policies which facilitate the
     development and use of computing in pro-social ways.


     Studies of these questions have focussed on complex information
systems, computer-based modelling, decision-support systems, the
myriad forms of office automation, electronic funds transfer systems,
expert systems, instructional computing, personal computers, automated
command and control systems, and computing at home.  The questions
vary from study to study.  They have included questions about the
effectiveness of these technologies, effective ways to manage them,
the social choices that they open or close off, the kind of social and
cultural life that develops around them, their political consequences,
and their social carrying costs.

     CORPS studies at Irvine have a distinctive orientation -

(i) in focussing on both public and private sectors,

(ii) in examining computerization in public life as well as within
      organizations,

(iii) by examining advanced and common computer-based technologies "in
      vivo" in ordinary settings, and

(iv) by employing analytical methods drawn from the social sciences.



         Organizational Arrangements and Admissions for CORPS


     The CORPS concentration is a special track within the normal
graduate degree programs of ICS and GSM.  Admission requirements for
this concentration are the same as for students who apply for a PhD in
ICS or an MS or PhD in GSM.  Students with varying backgrounds are
encouraged to apply for the PhD programs if they show strong research
promise.

     The seven primary faculty in the CORPS concentration hold
appointments in the Department of Information and Computer Science and
the Graduate School of Management.  Additional faculty in the School
of Social Sciences, and the program on Social Ecology, have
collaborated in research or have taught key courses for CORPS
students.  Research is administered through an interdisciplinary
research institute at UCI which is part of the Graduate Division, the
Public Policy Research Organization.

Students who wish additional information about the CORPS concentration
should write to:

          Professor Rob Kling (Kling@uci)
          Department of Information and Computer Science
          University of California, Irvine
          Irvine, Ca. 92717
          714-856-5955 or 856-7403

                                or to:

          Professor Kenneth Kraemer (Kraemer@uci)
          Graduate School of Management
          University of California, Irvine
          Irvine, Ca. 92717
          714-856-5246

------------------------------

End of AIList Digest
********************

∂11-Nov-84  0004	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #152    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Nov 84  00:03:59 PST
Date: Sat 10 Nov 1984 22:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #152
To: AIList@SRI-AI


AIList Digest            Sunday, 11 Nov 1984      Volume 2 : Issue 152

Today's Topics:
  Misc. - Band Name,
  Machine Translation - Aymara as Interlingua,
  Linguistics - Sastric Sanskrit & Language Degeneration,
  Knowledge Representation - Problem Solving Representations,
  Seminars - Rule-Based Debugging System & PROLOG Data Dependency Analysis,
  Conference - IJCAI-85
----------------------------------------------------------------------

Date: 10 Nov 84 18:53:18 PST (Saturday)
From: Mark Sabiers <Sabiers.es@XEROX.ARPA>
Reply-to: Sabiers.es@XEROX.ARPA
Subject: The names of bands

The enclosed message came through net.music (uucp) and Info-Music
(ARPA).  Thought it was appropriate to this list.

Mark


  Subject: Artificial Intelligence
  Date: 8 Nov 84 04:46:12 GMT
  Organization: AT&T Bell Labs, Holmdel NJ
  From: "N.BRISTOL" <bristol@hou2h.uucp.ARPA>

  Has anyone heard of a band called
  Artificial Intelligence?  I heard a
  tune on the radio and I would like to know more
  about the band.
  RSVP by mail or the net, I don't care.

  Gil Bristol
  AT&T Consumer Products
  Neptune, NJ
  hou2h!bristol

------------------------------

Date: Fri Nov  9 1984 13:22:59
From: Yigal Arens <arens%usc-cse.csnet@csnet-relay.arpa>
Subject: Strange new languages


As if we didn't have enough trouble with Sastric Sanskrit, last Wednesday's
LA Times contains a story about wonderful advances in machine translation
using an Indian language (Aymara) which "according to some historians [was
constructed by] wise men from scratch, by logical, premeditated design, as
early as 4,000 years ago."  How "some historians" know this remains a
mystery, all the more so since according to the article there are hardly any
written records of the language.

Anyway, a Bolivian mathematician, Ivan Guzman de Rojas, has devised a system
for machine translation using this language as a "bridge".

        "Sitting at a computer terminal, Guzman de Rojas demonstrates by
         typing a tricky Spanish sentence: `La mujer que vino ayer tomo
         vino.'  Less than a second after he pushes a button, five
         translations flash on the screen and roll off a printer.  The
         English reads: `the woman who came yesterday drank wine.'

        "The system is remarkable, according to US and Canadian experts, not
         only for its speed and versatility, but its ability to sort out
         ambiguities.  Other systems, they say, cannot distinguish between
         uses of the word `vino' - which can mean `came' or `wine' - without
         an awkward modification of the computer logic."

The article is full of inaccuracies concerning machine translation.

It claims that Wang has recently given Guzman de Rojas $50,000 plus a
$100,000 computer "to refine his system."

Anybody know more about this?

Yigal Arens
USC

------------------------------

Date: 8 Nov 1984 11:59-PST (Thursday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit & Language Degeneration


        By "has begun", I meant since the mid-nineteenth century.
Since the time frame I have been writing in is measured by millennia,
one century qualifies for "has begun".
        Anyway, I wonder what Bill Poser means by saying:
"But that does not mean that the *language* degenerates--only that
its use degenerates."  If a language is abused to a large extent by
its speakers, has it not degenerated?  What seems to be implied
here is that there is some abstract "language" prototype which
exists independent of use.  If this is so, violations to this prototype
are degeneration.  This is exactly the point of view of Panini etc.
The Indian and Greek cultures considered language to be a primary
component of culture (in the Indian case, language IS culture: the word
Aryan originally meant one who spoke the Aryan language, i.e. Sanskrit).
To illustrate what I mean by degeneration, consider a group of
primitives who begin to use language.  They begin with primitive
grunts to signify essential notions such as "food".  Later, they find
that the machinery of the language does not allow the expression of
concepts.  Thus the language evolves and evolves.  The ultimate
evolution is reached  when a language can express all notions in the
realms of the physical, emotional, conceptual, and spiritual in
a concise unambiguous way.  Sastric Sanskrit may indeed be that
language (or close to it).  Now the less lofty of the population
find no need to use such words as "none other than", "agreeing with
no other", "activity conducive towards existence" etc. (these are words
in Sastric Sanskrit).  So they cease to use the complex machinery
and revert to simple formations to express what they need to.
If there is no prescription, or encouragement in the educational
process to stick to the higher form of the language, the more popular
masses (consider television) will exert pressure on the less numerous
scholarly class, and the language will begin to revert backwards.
This is exactly what happened to Sanskrit.  The "Prakrits" and "Apabrahmshas"
eventually turned Sanskrit into Hindi, Bengali etc., which do not
have the sophisticated machinery Sastric Sanskrit had.  In other words,
where one word in the Sastra signified a concept, an entire sentence
is now needed in the degenerated form of the language.  I believe this
is also the pattern which Proto-Indo-European followed, and which
English is following now.

        Once again, Sastric Sanskrit is a natural language.  But what
exactly is a natural language?  Is it existence of native speakers
(as Bill Poser suggests), or is it something about the nature of the
language itself?  Whether consciously or not, linguists and NLP
people think of natural languages as necessarily being ambiguous
and very different from the predicate calculus.  What the existence of
the Sastra indicates is that the definition of natural language
should be changed.  I would say that a natural language is one which
1) is used
2) which has the ability to express naturally, all the various aspects
of the natural world.
Thus, if Esperanto were used in a culture, it would be a natural
language. Mathematics cannot naturally express poetic notions, it
is defined over only a small aspect of the natural world.  Sastric
Sanskrit (so I have been told by Sanskrit experts) had (and may still
have) native speakers. It is also capable of expressing anything any
other natural language can express.  You can write philosophy or
poetry in the Sastra.  I challenge anybody to find a sentence in
any language which cannot be expressed using the machinery of
Sastric Sanskrit.
        I think the real point is that the Sastra is a bridge
between the natural and artificial and challenges common notions
of what the boundary is.  One conclusion I would make is that
it is possible for a child to be raised speaking totally
unambiguously from birth and never suffer from lack of expression
or cumbersomeness.  As an interlingua, Sastra would be great
because it can codify with exactitude and make inferences naturally,
and yet poetic notions can be coded and not lost on the target
language.

Rick

------------------------------

Date: Fri 9 Nov 84 07:41:30-CST
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: convenient problem solving representations

There was a conference on knowledge representation and languages at the
Applied Physics Lab of Johns Hopkins from Oct 29-31.  One of the main
issues was that current programming languages force one to use
primitives that map well to a machine, but badly to most problem
domains.  Thus there are two problems: What primitives are appropriate
for a given problem domain and how can one map those into an executable
module on a given machine?

Jean Sammet from IBM contended that many problem-domain specific
languages already exist, but obviously there aren't enough or everyone
would be pretty content by now.  What it seems we need are guidelines to
help with these questions.

These are questions for all computer scientists, but especially those of
us in AI who have spent time developing new knowledge representations
rather than implementing old ones.

-Aaron

------------------------------

Date: 8 November 1984 1227-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - Rule-Based Debugging System

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Bernd Bruegge
        Date:   November 12, 1984
        Time:   3:30 - 4:30
        Place:  WeH 5409
        Title:  "PATH RULES:  Debugging as a Rule-Based Activity"


Debugging has often been considered an ad hoc technique with no underlying
model for the user. In this talk we show how debugging can be viewed as a
rule-based activity. Rule-based systems have been used extensively in the
area of artificial intelligence. We demonstrate that they can be quite useful
in the area of debugging.

We have designed and implemented a language called PATH RULES. Several
examples of PATH RULES on the implementer as well as on the user level are
given: We show how rules can be used in the design of the command language,
the implementation of debugging mechanisms (breakpoints, tracing, etc),
screen layout, dialog control and multiple process debugging problems.
PATH RULES have been used in the implementation of the Interim Spice
Debugger KRAUT. KRAUT is a remote, source oriented debugger for Pascal
running under the Accent Operating system and is currently being modified
for Ada.

------------------------------

Date: Fri, 9 Nov 84 09:32:17 pst
From: (Julia D. Snyder [csam]) julia@lbl-csam
Subject: Seminar - PROLOG Static Data Dependency Analysis

        [Forwarded from the LBL distribution by Laws@SRI-AI.]


       High Performance Execution of PROLOG Programs
         Based on a Static Data Dependency Analysis
                             by
                     Jung-Herng Chang*
                    (UCB Aquarius Group)

                 Room:  Bldg. 50B Rm. 4205
                  Date:  November 12, 1984
               Time:  10:30 a.m. - 12:00 p.m.

Outline
     What is PROLOG?  Why is it an important symbolic manipulation
     language?  The performance of executing PROLOG programs has been
     improved by going from interpreters to compilers, and then to
     special hardware (e.g. the PLM Machine at UCB).  What is the next
     step to improve performance?  This talk begins with an introduction
     to PROLOG, followed by a discussion of more advanced topics in PROLOG.
     A methodology for a static data dependency analysis for PROLOG is
     introduced, as well as its applications to the PLM Machine and a
     parallel execution environment.

*The speaker is also affiliated with ACAL LBL.

------------------------------

Date: Fri 9 Nov 84 08:49:27-PST
From: AAAI-OFFICE <AAAI@SRI-AI.ARPA>
Subject: IJCAI-85 Call


                                IJCAI-85
                             CALL FOR PAPERS

The IJCAI conferences are the main forum for the presentation of Artificial
Intelligence research to an international audience.  The goal of the IJCAI-85
is to promote scientific interchange, within and between all subfields of AI,
among researchers from all over the world.  The conference is sponsored by the
International Joint Conferences on Artificial Intelligence (IJCAI), Inc., and
co-sponsored by the American Association for Artificial Intelligence (AAAI).
IJCAI-85 will be held at the University of California, Los Angeles from
August 18 through August 24, 1985.

        * Tutorials: August 18-19; Technical Sessions: August 20-24

TOPICS OF INTEREST

Authors are invited to submit papers of substantial, original, and previously
unreported research in any aspect of AI, including:

* AI architectures and languages
* AI and education (including intelligent CAI)
* Automated reasoning (including theorem proving, automatic programming,
  planning, search, problem solving, commonsense, and qualitative reasoning)
* Cognitive modelling
* Expert systems
* Knowledge representation
* Learning and knowledge acquisition
* Logic programming
* Natural language (including speech)
* Perception (including visual, auditory, tactile)
* Philosophical foundations
* Robotics
* Social, economic and legal implications


REQUIREMENTS FOR SUBMISSION

Authors should submit 4 complete copies of their paper.  (Hard copy only, no
electronic submissions.)

        * LONG PAPERS: 5500 words maximum, up to 7 proceedings pages
        * SHORT PAPERS: 2200 words maximum, up to 3 proceedings pages

Each paper will be stringently reviewed by experts in the topic area specified.
Acceptance will be based on originality and significance of the reported
research, as well as the quality of its presentation.  Applications clearly
demonstrating the power of established techniques, as well as thoughtful
critiques of previously published material will be considered, provided that
they point the way to new research and are substantive scientific contributions
in their own right.

Short papers are a forum for the presentation of succinct, crisp results.
They are not a safety net for long paper rejections.

In order to ensure appropriate refereeing, authors are requested to
specify in which of the above topic areas the paper belongs, as well
as a set of no more than 5 keywords for further classification within
that topic area.  Because of time constraints, papers requiring major
revisions cannot be accepted.

DETAILS FOR SUBMISSION

The following information must be included with each paper:

        * Author's name, address, telephone number and net address
          (if applicable);
        * Topic area (plus a set of no more than 5 keywords for
          further classification within the topic area.);
        * An abstract of 100-200 words;
        * Paper length (in words).

The time table is as follows:

        * Submission deadline: 7 January 1985 (papers received after
          January 7th will be returned unopened)
        * Notification of Acceptance: 16 March 1985
        * Camera Ready copy due: 16 April 1985

Contact Points

Submissions should be sent to the Program Chair:

        Aravind Joshi
        Dept of Computer and Information Science
        University of Pennsylvania
        Philadelphia, PA 19104 USA

General inquiries should be directed to the General Chair:

        Alan Mackworth
        Dept of Computer Science
        University of British Columbia
        Vancouver, BC, Canada V6T 1W5

Inquiries about program demonstrations (including videotape system
demonstrations) and other local arrangements should be sent to
the Local Arrangements Chair:

        Steve Crocker
        The Aerospace Corporation
        P.O. Box 92957
        Los Angeles, CA 90009 USA

Inquiries about tutorials, exhibits, and registration should be
sent to the AAAI Office:

        Claudia Mazzetti
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025 USA

------------------------------

End of AIList Digest
********************

∂11-Nov-84  2334	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #153    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Nov 84  23:33:11 PST
Date: Sun 11 Nov 1984 21:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #153
To: AIList@SRI-AI


AIList Digest            Monday, 12 Nov 1984      Volume 2 : Issue 153

Today's Topics:
  Linguistics - Language Degeneration,
  Algorithms - Malgorithms,
  Project Report - IU via Dialectical Image Processing,
  Seminars - Spatial Representation in Rats &
    Human Memory Capacity & Design of Computer Tools &
    Artificial Intelligence and Real Life
----------------------------------------------------------------------

Date: Sun, 11 Nov 84 14:54:08 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Language Degeneration

I'd like to make a few comments on Briggs' statements on language
degeneration and Sanskrit, English, etc. The idea that a language
degenerates stems from the 19th century biological metaphor which
has been refuted for at least 100 years. Language is not alive; people
are. We in linguistics know that "language death" is a metaphor and
has almost nothing to do with the language as a system; it has everything
to do with the sociocultural conditions on speaking and the propagation
of cultural groups.

How can it be reasonably said that a language degenerates if it is
abused? What do you mean by "abused"??? If "abused" means "speaking in
short sentences," then everyone "abuses" the language, even the most
ardent pedants. Language indeed changes, but it does not degrade.

Briggs says that violations of the prototype are degenerations. This
is true by definition only. And this definition can be accepted only
if one also adheres to a Platonic notion of language history, wherein
the pure metaphysical Ursprache is degraded by impure shadowy manifestations
in the real world. Maybe Briggs is a Platonist, but then he's not saying
anything about the real world.

Popular use does NOT imply a reversion or "reversal" of progress
in language change. There is no progress in language change: a change
in one part of the system over time which complicates the system
generally causes a simplification in another part of the system.
So, Hoenigswald said that languages maintain about 50% redundancy
over time.

What is the "sophisticated machinery" Briggs talks about? I suspect
he means that languages which have a lot of morphology
and are synthetic are somehow "more sophisticated" than "our poor
unfortunate English," which is analytic and generally free of
morpho-syntactic markings. Honestly, the idea that a synthetic language
is "better" than a degraded analytic English is another remnant of
the 19th century (where neo-Platonism also reigned).

The evolution of analytic languages from synthetic versions (i.e.,
pure to degraded) is not only charged with moral claims, but it is also
wrong.

1. Finnish has retained its numerous case markings over time, as has
Hungarian.

2. Colloquial Russian has begun to add case markings (instrumental in
the predicate nominative).

3. English is losing overt marking of the subjunctive: are we therefore
less able to express subjunctive ideas? Is English becoming (GOOD GOD!)
non-subjunctive, non-hypothetical....

If Briggs is right, then he himself is contributing to the degradation
by his very speech to his friends. (I, of course, don't believe this.)

Finally, if Briggs is right about the characteristics of natural language,
then any natural language can be a bridge, not necessarily Sanskrit. And
this claim is tantamount to saying only that translation is possible.

Bill Frawley

Address: 20568.ccvax1@udel

------------------------------

Date: 11 Nov 1984 18:02:20 EST
From: MERDAN@USC-ISI.ARPA
Subject: balgorithms

Here are a couple of balgorithms that I encountered on a single microprocessor
project.  Neither of these balgorithms appeared the slightest bit bad to
their authors, and one of them was insulted when I pointed out how bad
his approach really was.

Balgorithm #1

Problem
  Perform error correction for a (15,11) Hamming code on an 8-bit micro
  (Intel 8008).

Original solution
  Implement a feedback shift register with error trapping logic as with
  a BCH code.  Approximately 600 bytes of tricky code was required.

Better solution
  Use the classic error detection matrix method.  I believe about 100
  bytes of obvious code was required.
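For readers unfamiliar with the matrix method: it amounts to syndrome
decoding.  XOR together the (1-indexed) positions of all set bits in the
received word; a nonzero result names the single bit in error.  A minimal
sketch in Python follows (the original was 8008 assembly, of course, and
the function names here are mine):

```python
def hamming15_encode(data):
    """Encode 11 data bits as a (15,11) Hamming codeword.
    Parity bits occupy the power-of-two positions 1, 2, 4, 8."""
    code = [0] * 15
    j = 0
    for pos in range(1, 16):
        if pos & (pos - 1):            # non-power-of-two: a data position
            code[pos - 1] = data[j]
            j += 1
    syndrome = 0
    for pos in range(1, 16):
        if code[pos - 1]:
            syndrome ^= pos
    for p in (1, 2, 4, 8):             # set parity bits to zero the syndrome
        if syndrome & p:
            code[p - 1] = 1
    return code

def hamming15_correct(code):
    """Fix any single-bit error in place.  The syndrome is the XOR of
    the positions of all set bits; if nonzero, it names the bad bit."""
    syndrome = 0
    for pos in range(1, 16):
        if code[pos - 1]:
            syndrome ^= pos
    if syndrome:
        code[syndrome - 1] ^= 1
    return code
```

The two syndrome loops reduce to one short pass over the received bits,
which is presumably why the matrix method fit in roughly 100 bytes while
the feedback-shift-register version took 600.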

Balgorithm #2

Problem
  Calculate horizontal and vertical parity for a sequence of 5 bit char-
  acters and tack them on at end of the sequence.

Original solution
  Pick up each character and count the number of 1s by masking out each
  bit with a separate mask, packing the resultant bit into a 5 bit word
  on the fly. About 1500 bytes of very buggy code resulted.

Better solution
  Treat the sequence in blocks of 5 characters.  For each block prestore
  a pattern assuming that parity is even.  Pick up each character, determine
  its parity (the load did this on the 8008), and clear the pattern for that
  character.  Or the patterns together, producing the result.  About 150
  bytes of mostly straight line code resulted.

Even better solution
  Don't calculate parity in software but let the UART hardware generate
  and check the parity.
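The structure of the better solution is easier to see in a high-level
sketch (hedged: the function name and block handling are mine, not the
original 8008 code).  Horizontal parity is each character's popcount
mod 2; vertical parity is the columnwise XOR of the block:

```python
def block_parity(chars):
    """chars: a block of 5-bit characters (ints 0..31).
    Returns (horizontal, vertical): horizontal[i] is the even-parity
    bit of chars[i]; vertical is the columnwise XOR of the block."""
    horizontal = [bin(c).count("1") & 1 for c in chars]
    vertical = 0
    for c in chars:
        vertical ^= c                  # XOR accumulates per-column parity
    return horizontal, vertical
```

The per-character parity is what the 8008's load instruction gave for
free, and the XOR accumulation replaces all the bit-by-bit masking of
the original solution.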

Comment
  In both cases I feel that the justification for the original solution
  was that the programmer wanted to do some tricky coding just to prove
  that he could do it, rather than understanding the problem first.  This
  tendency does not seem to be going away as fast as we all would like.


Thanks

Terry Arnold

------------------------------

Date: 13 Oct 1984 11:40-EDT
From: ISAACSON@USC-ISI.ARPA
Subject: Project Report - IU via Dialectical Image Processing (DIP)

             [Forwarded from Vision-List by Laws@SRI-AI.]


Just read the summary of DARPA IU effort which I find very interesting.
By coincidence, we submitted this week to DARPA a summary of our current
efforts in "Dialectical Pattern Processing".  Although phrased in
broader terms, much of this effort is also directed toward IU.  We
enclose a copy of the report in the possible interest of the vision-list
readership.  -- JDI

10/7/84
                  DARPA Research Summary Report
             I M I  Corporation, St. Louis, Missouri
         Project Title:  Dialectical Pattern Processing


Overview.    Earlier  work  [1] has demonstrated  unusual  low-level
intelligence features in dialectical processing of string  patterns.
This  effort  extends  dialectical processing to  2-D  arrays,  with
applications in machine-vision.   I M I  Corporation is an innovator
in Dialectical Image Processing (DIP),  a new subfield in very  low-
level vision (VLLV) research.   Dialectics is an elusive doctrine of
philosophy and (non-standard) logic that can be traced from Plato to
Hegel  and beyond,  but that has never lent itself to be grounded in
precise  formalisms or in computing machines.   Certain  influential
philosophies  hold  that  the   universe  operates  in  accord  with
dialectical  processes,  culminating  in  the  activity  of  thought
processes.   This  effort builds on the fact that [1] discloses  the
first and only machine implementation of dialectical processes.

Objectives.     A broad long-term objective is to test  a   working-
hypothesis  that  states that dialectical processes are  fundamental
ingredients,   in  addition  to  certain  others,  in  autonomically
emergent intelligences. Intelligences that bootstrap themselves in a
bottom-up   fashion  fall  into  this  category.    More   immediate
objectives  are  (1) to demonstrate the technical feasibility  of  a
small  number  of VLSI chips to host a dialectical image  processor,
and (2) to evaluate the type of intelligence inherent in networks of
dialectical processors, with emphasis on learning.

Approach.   A  mix  of  activities includes software  simulation  of
dialectical  networks  for  image  processing;  VLSI-based  hardware
design  for  dialectical  image processors;  and assessment  of  the
learning capabilities inherent in the above-mentioned systems.

Current  Status & Future Plans.     Consideration of the possibility
of  dialectical  processing began in the  early  Sixties.   By  now,
theoretical  foundations have been laid and  dialectical  processing
has been amply demonstrated in strings and in 2-D arrays (see Fig. 1
&  Fig.  2 below) to the point where it appears to support a  viable
new  computer-vision technology.   Feasibility studies in the design
of  VLSI-based DIPs have shown that reasonably large  DIPs  (100x100
pixels)  will fit into a single card and can be readily implemented,
at least for experimentation.   Scant resources limit the scope of some
of the activities listed below and preclude others, though all are
considered important to the advancement of this technology.

*   Run software simulations of DIP on better equipment (e.g.,  Lisp
machine or BION workstation) and attempt to extend effort to 3-D.

*  Implement in VLSI hardware a prototype of a moderate size DIP.

*  Attempt to specialize other vast parallel networks (e.g., Hillis'
Connection   Machine  [2]  or  Fahlman's  Boltzmann  Machine)   into
dialectical image processors.

*   Specialize  a  network  of  dialectical  processors  to  support
low-level machine learning by analogy and metaphor.


            Fig. 1 - DIP Analysis of a Plane Silhouette
                 [Graphics will be sent by US Mail]

   Fig. 2 - Selected Steps from DIP Analysis of a Tank Silhouette
                 [Graphics will be sent by US Mail]

Resources and Participants.   Available resources are limited.   The
list  of participants includes:  Joel D.  Isaacson,  PhD,  Principal
Investigator;   Eliezer Pasternak,  MSEE,  Project Engineer;   Steve
Mueller,  BS/CS,  Programmer;  Ashok Jain, MS/CS, Research Assistant
(SIU-E).


Products,  Demonstrable Results,  Contact Points.   Certain products
and  results are proprietary and included in patent applications  in
progress.   Software  simulation of DIP can be readily demonstrated.
A  version  written  in Pascal for the IBM  PC/XT  is  available  on
request.    Point  of  contact:   Dr.  Joel  D.  Isaacson,   I  M  I
Corporation,  20 Crestwood Drive,  St. Louis, Missouri 63105, Phone:
(314) 727-2207, (ISAACSON@USC-ISI.ARPA).


References

[1]  Isaacson, J. D., "Autonomic String-Manipulation System,"  U. S.
Patent No. 4,286,330, August 25, 1981.

[2]  Hillis,  W.  D.,  "The Connection Machine," Report AIM-646, The
Artificial Intelligence Laboratory, MIT, Sept. 1981.

Acknowledgements

Supported  by the Defense Advanced Research Projects Agency  of  the
Department of Defense under ONR Contract No.  N00014-82-C-0303.  The
P.I.   gratefully  acknowledges additional support and encouragement
received   from  the  Department  of  Mathematics,  Statistics,  and
Computer  Science,  Southern Illinois University at Edwardsville.

------------------------------

Date: Thu, 8 Nov 84 13:23:13 pst
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Spatial Representation in Rats

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 13, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        C.  R.  Gallistel,  Psychology   Department,
                University  of  Pennsylvania;   Center for
                Advanced Study in the Behavioral Sciences

TITLE:          ``The rat's representation  of  navigational
                space:   Evidence  for  a  purely  geometric
                module''

ABSTRACT:       When the rat is shown the location of hidden
                food  and  must subsequently find that loca-
                tion, it  relies  strongly  upon  a  spatial
                representation  that  preserves  the  metric
                properties of the enclosure (the large scale
                shape   of  the  environment)  but  not  the
                nongeometric characteristics  (color,  lumi-
                nosity, texture, smell) of the surfaces that
                define the space.  As a result,  the  animal
                makes   many  ``rotational''  errors  in  an
                environment that has a rotational  symmetry,
                looking in the place where the food would be
                if the environment  were  rotated  into  the
                symmetrically  interchangeable position.  It
                does   this   even   when   highly   salient
                nongeometric   properties  of  the  surfaces
                should enable it to avoid these costly rota-
                tional  errors.   Evidence is presented that
                the   rat   notes   and   remembers    these
                nongeometric properties and can use them for
                some purposes, but cannot directly use  them
                to   establish  positions  in  a  remembered
                space, even when it would be highly advanta-
                geous  to  do so.  Thus, the rat's position-
                determining system appears to be an encapsu-
                lated  module  in  the Fodorian sense.  Con-
                siderations of possible  computational  rou-
                tines  used to align the currently perceived
                environment  with  the  animal's  map  (its
                record   of   the   previously   experienced
                environment) suggest reasons why this  might
                be  so.  Old evidence on the finding of hid-
                den food by chimpanzees suggests  that  they
                rely on a similar module.  This leads to the
                conjecture that the module is  universal  in
                higher vertebrates.

------------------------------

Date: Thu, 8 Nov 84 22:51:11 pst
From: Misha Pavel <mis@SU-PSYCH>
Subject: Seminars - Human Memory Capacity & Design of Computer Tools

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


*************************************************************************
                   Two talks by T.K. Landauer:
*************************************************************************

       Some attempts to estimate the functional capacity
                   of human long term memory.

                         T. K. Landauer
               Bell Communications Research, N.J.

Time:  Wednesday, November 14, 1984 at 3:45 pm
Place: Jordan Hall, Building 420, Room 050

How much useful (i.e., retrievable)  information  does  a  person's
memory  contain? More than a curiosity, even an approximate answer
would be useful in guiding theory about underlying mechanisms and
the  design of artificial minds. By considering observed rates at
which knowledge is added to and lost from long-term  memory,  and
the information demands of adult cognition, several different es-
timates were obtained, most within a few orders of  magnitude  of
each other. Obtaining information measures from  performance data
required some novel models of recognition and recall memory  that
also will be described.

-------------------------------------------------------------------

  PSYCHOLOGICAL INVENTION: some examples of cognitive research
          applied to the design of new computer tools.

                         T. K. Landauer
               Bell Communications Research, N.J.

Time:  Friday, November 16, 1984 at 3:15 pm
Place: Jordan Hall, Building 420, Room 100

Computers offer the possibility of designing  powerful  tools  to
aid  people  in  cognitive  tasks. When psychological research is
able to determine the factors that currently limit how  well  hu-
mans   perform  a particular cognition-based activity, the design
of effective new computer aids sometimes follows directly. Illus-
trative  examples  will be described in information retrieval and
text-editing applications. In the former, insights leading to in-
vention  came from systematic observations of the actual linguis-
tic behavior of information-seekers, in the latter from  correla-
tions  of task performance with measured and observed differences
in individual characteristics.

------------------------------

Date: Fri, 09 Nov 84 16:25:11 EST
From: "Paul Levinson" <1303@NJIT-EIES.MAILNET>
Subject: Seminar - Artificial Intelligence and Real Life


     "Artificial Intelligence and Real Life"

     Abstract of talk to be given by Paul Levinson at the New School
for Social Research, November 12, 1984, 8 PM, 66W12th St., NYC.

     Part of the 1984-1985 Colloquium on Philosophy and Technology,
sponsored by the Polytechnic Institute of New York and the New School.


     Talk begins by distinguishing two types of "AI": "auxiliary" or
"augmentative" intelligence (as in mainframes extending and
augmenting the social epistemological enterprise of science, and
micros extending and augmenting thinking and communication on the
individual level), and "autonomous" intelligence, or claims that
computers/robots can or will function as self-operating entities, in
independence of humans after the initial human programming.  The
difference between these two types of AI is akin to the difference
between eyeglasses and eyes.

     Augmentative intelligence on the mainframe scientific level will
be assessed as reducing intractable immensities of data, or allowing
human cognition to process ever larger portions and systems of
information.  Just as the telescope equalizes human vision to the vast
distances of the universe, so computers on the cutting edges of
science make our mental capacities more equal to the vast numerosity
of data we encounter in the macro and micro universes.  The social and
psychological as well as cognitive consequences of micro computers and
the type of instant, intimate, intellectual and personal communication
they allow across distances will be compared to the Freudian
revolution at the turn of the century in its impact upon the human
psyche and the way we perceive ourselves.  Critics of these two types
of computers such as Weizenbaum will be seen as part of a long line of
naive and failed media critics beginning at least as far back as
Socrates, who denounced writing as a "misbegotten image of the spoken
original," certain to be destructive of the intellect (Phaedrus).

     "Expert systems" and "human meat machines" claims for autonomous
intelligence in machines will be examined and found wanting.
Alternative approaches such as Hofstadter's "bottom-up" ideas will be
discussed.  A conception of the evolution of existence in the natural
cosmos as progressing in a subsumptive way from non-living to living
to intelligent material will be introduced, and this model along with
Hofstadter-type critiques will lead to the following conclusion: the
problem with current attempts at autonomous intelligence is that the
machines in which they're situated are not alive, or do not have
enough of the characteristics necessary for the sustenance of the
"living" label.  Put otherwise, the conclusion will be: in order to
have artificial intelligence (the autonomous kind), we first must have
artificial life; or: when we indeed have created artificial
intelligences which everyone agrees are truly intelligent and
autonomous, we'll look at these "machines" and say: My God (or
whatever)!  They're alive.

     Practical and moral problems that may arise from the creation of
machines that are more than metaphorically autonomous of their human
producers will be examined.  These machines will most likely be in the
form of robots, since robots can move in the world and interact with
environments in the direct ways characteristic of living organisms.

------------------------------

End of AIList Digest
********************

∂15-Nov-84  0022	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #154    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Nov 84  00:14:35 PST
Date: Wed 14 Nov 1984 22:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #154
To: AIList@SRI-AI


AIList Digest           Thursday, 15 Nov 1984     Volume 2 : Issue 154

Today's Topics:
  Pattern Recognition - Partial Matching,
  LISP - Lisp Mailing Lists? & Conversion Between Dialects,
  Conference - IJCAI-85,
  AI Tools - LM-Prolog and DCG's,
  Perception - Dialectics,
  Linguistics - Mel'cuk's Dictionary & Aymara & Language Evolution,
  Humor - Artificial Poetry,
  Seminars - Speech Acts
----------------------------------------------------------------------

Date: Wed, 7 Nov 84 08:20:48 cst
From: Mohd Nasiruddin <nasir%lsu.csnet@csnet-relay.arpa>
Subject: Partial Matching


     I am interested in finding information on the extent of
work done in partial matching.

     If anyone can point me towards research in  this  area,
or references, please respond as early as possible.

     Thanks in advance.


                    ---Mohd. Nasiruddin

                    Dept.  Of  Computer  Science,
                    Louisiana State University,
                    Baton Rouge, La 70893.

                    CSNET: <nasir%lsu@csnet-relay>

------------------------------

Date: Tue, 13 Nov 84 15:02:33 -0200
From: jaakov%wisdom.BITNET@Berkeley (Jacob Levy)
Subject: Are there other Lisp mailing lists?

        I know of franz-friends@berkeley.  Are there other lists for people
who have Symbolics 3600s, Maclisp, etc.?  Thanks for the info,

        Rusty Red (AKA Jacob Levy)

        BITNET:                         jaakov@wisdom
        CSNET and ARPA:                 jaakov%wisdom.bitnet@wiscvm.ARPA
        UUCP: (if all else fails..)     ..!decvax!humus!wisdom!jaakov

------------------------------

Date: Wed, 14 Nov 84 13:56 MST
From: May%pco@CISL-SERVICE-MULTICS.ARPA
Subject: Conversion Between Dialects of Lisp

I'm looking for tools to convert among the following Lisp dialects, with
the potential for going in any direction.  Any replies sent to me will
be published collectively.  Thanks.

 Maclisp, the Multics version
 Interlisp, the GCOS version from the U. of Waterloo
 Franz Lisp
 Common Lisp

Bob May

------------------------------

Date: Mon, 12 Nov 84 10:28 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: IJCAI-85

←←←←←←←←←←←←←←←←←←←←←←←←← IJCAI-85 ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

The call for papers for IJCAI-85  has already been issued. The deadline
is January 7, 1985.  Please send your suggestions for invited speakers,
panels, or any other aspects concerning the technical program to:

     Aravind Joshi, Program Chair IJCAI-85
     Department of Computer and Information Science
     University of Pennsylvania
     Philadelphia, PA 19104
     USA

Clearly, it is impossible to accept all suggestions.  However, your
suggestions are essential and will be carefully considered by the
Program Committee.

------------------------------

Date: 12 Nov 84 11:06 PST
From: Kahn.pa@XEROX.ARPA
Subject: LM-Prolog and DCG's

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

In answer to John Burge's questions in V2 #33 [AIList V2 #136]:

My experiences using LM-Prolog have been very positive
but I am surely not an un-biased judge (being one of the
co-authors of the system).   (I am tempted to give a
little ad for LM-Prolog here, but will refrain.  Interested
parties can contact me directly.)

Regarding the Grammar Kit, the main thing that distinguishes
it from other DCGs is that it can continuously maintain a
parse tree.  The tree is drawn as parses are considered and
parts of it disappear upon backtracking.  I have found this
kind of dynamic graphic display very useful for explaining
Prolog and DCGs to people as well as debugging specific
grammars.

------------------------------

Date: Mon, 12 Nov 84 16:41:37 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Dialectics & Mel'cuk's Dictionary

Two things:

1. Isaacson has discussed dialectical image processing. There is a considerable
body of information on dialectical psychology and psycholinguistics which
may be of some help theoretically. The work by Klaus Riegel is seminal, as is
the work of the Soviets (esp. Vygotsky and cohorts). Though I know of no work
on vision using dialectical psychology, their work on the dialectics of
perception and cognition might be of use.

Also, the Soviets have made some attempts to develop dialectical logic: i.e.,
some form of a dialectical predicate calculus. I can't remember the references
for this, but I think I ran across it in the 1970 surveys of Soviet thought,
or perhaps in the Soviet studies series (Soviet Philosophy, Psychology, etc.
published by Sharpe). In any case, there have been attempts at formal
dialectic logic (though they may be ideologically charged), and these
studies may help in formalizing algorithms for low-level visual perception
in a dialectical model.

2. More generally for AI: there's a new dictionary out, written by I.A.
Mel'cuk and published by Montreal U. which is the richest formal/linguistic
representation I've seen of the encyclopedic structure of the lexicon.
It combines lexical collocation and a set of 53 relations to generate
the entire lexicon. It is very good for text-generation. But, it's in
French. Bonne chance, mes amis...

Bill Frawley

20568.ccvax1@udel

------------------------------

Date: Wed, 14 Nov 84 15:03 CST
From: "Brett D. Slocum" <Slocum.CSCDA@HI-MULTICS.ARPA>
Subject: Language translation

Ancient Purity and Polyglot Programs
London Sunday Times, November 4th, 1984
John Barnes

    Aymara, an old South American tongue used mainly by Andean peasants and
llama-herders,  has  enabled  a  Bolivian  mathematician to score a notable
first   in   the   increasing  application  of  the  computer  to  language
translation.   Using  it  as an intermediate language, Ivan Guzman de Rojas
has  written  the  first computer program capable of translating an English
text into several other languages simultaneously, rather than one at a time
as could already be done, at speeds of up to 120 words a minute.

    Aymara  is  spoken by 2.5 million people living around Lake Titicaca on
the  border  between  Bolivia  and  Peru.  There is no written form in use;
Aymara  speakers  who  can  write  do so in Spanish, the country's official
language.   Yet  Guzman  discovered  that  it is so logical and pure in its
syntax that it makes an ideal bridging language to a computer.

    Aymara  is rigorous and simple - which means that its syntactical rules
always  apply,  and  can  be written out concisely in the sort of algebraic
shorthand  that computers understand.  Indeed, such is its purity that some
historians think that it did not just evolve, like other languages, but was
actually  constructed  from  scratch  some  4,000 years ago.  It is also so
compact that a few words in it can do the work of dozens in English.

    Canadian  and  American  experts  believe  Guzman's  system is not only
versatile  in  the  range  of languages it can handle, but that it can also
sort  out  ambiguities  in  a  language  as it translates.  This is because
Aymara has a sense of logic that is very different from European languages.

    Guzman,  who  now  runs  a computer consultancy in the capital, La Paz,
says  that while he was teaching mathematics to Aymara children he realised
that  their  language  admitted  an intermediate value of truth or falsity.
That, he said, enabled them to reason about things that were uncertain in a
way  Europeans  could not.  He has spent the past five years developing his
translation program, which he calls Atamiri (the Aymara for interpreter).
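The "intermediate value of truth or falsity" the article mentions
suggests a three-valued logic.  As a hedged illustration only, a
standard Lukasiewicz-style system (not Guzman's actual formalism,
which the article does not specify) looks like this:

```python
# Truth values: 1.0 = true, 0.5 = intermediate, 0.0 = false.
T, U, F = 1.0, 0.5, 0.0

def t_not(a):
    return 1.0 - a                     # negation flips around the midpoint

def t_and(a, b):
    return min(a, b)                   # conjunction takes the weaker value

def t_or(a, b):
    return max(a, b)                   # disjunction takes the stronger value
```

Under these connectives t_or(a, t_not(a)) is only 0.5 when a is 0.5, so
the law of the excluded middle fails for uncertain propositions, which is
the kind of reasoning about uncertainty the article attributes to Aymara
speakers.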

    What is even more laudable about Guzman's achievement is that he did it
in  his  spare time, on borrowed computers, without any commercial backing,
in  one  of  the world's poorest countries.  His clients, he says, gave him
free time on their computers at night and over the weekend.

    Guzman  has already turned down the commercial overtures made by one US
computer  giant.  Not surprisingly, he has become a staunch defender of the
Aymara  language,  which is not taught in Bolivian schools and is generally
discouraged as a deadend peasant tongue.

    "It is a disgrace those things can happen on our planet," he says.  "If
I  ever  make  any  money  from  this, I will see that they get books and a
newspaper in their own language."

------------------------------

Date: 12 Nov 84 16:54:43 EST
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: Language Evolution


         The ultimate evolution is reached  when a language can
        express all notions in the realms of the physical, emotional,
        conceptual, and spiritual in a concise unambiguous way.

The implication here is that there is a point where language *stops*
evolving. This is not the case.  One synchronic example of this diachronic
process is the presence of "dialects" within "languages" (I use these
terms cautiously, as defining them in a strict sense would be impossible
and unnecessary).  Although it might be argued that a written language may
stop evolving while the spoken language goes on changing, I sincerely
doubt that this would be feasible.
                                        -Allen

------------------------------

Date: Mon 12 Nov 84 18:59:17-PST
From: BARNARD@SRI-AI.ARPA
Subject: What Racter Wrote (from Harper's Magazine, Nov. 1984)

From "The Policeman's Beard is Half Constructed," a collection of
prose and poetry  written by a computer.  Racter, as the computer is
called, was programmed (by William Chamberlain) with rules of English
grammar and syntax, as well as an 8000-word vocabulary.  Warner Books,
publisher of "The Policeman's Beard," claims that it is the first book
written by a computer.


Bill sings to Sarah.  Sarah sings to Bill.  Perhaps they will do other
dangerous things together.  They may eat lamb or stroke each other.
They may chant of their difficulties and their happiness.  They have
love but they also have typewriters.  That is interesting.

PAUL: Hello, Joan.

JOAN: Hello, Paul.

PAUL: Joan, I was fantasizing that a white crow flew across the black
sky of night.

JOAN: I too have been thinking of a white crow.  It flies in the night
and it flies in the day.  It incites my dreams.  My fantasy is that I
can soar with it.

PAUL: A single white crow or many winging eagles.  All inflame my
dreams and cogitations.  A single bird or many birds are in my dreams.

JOAN: Paul, have you bolted meat today?

PAUL: I have bolted lamb and chicken.

JOAN: Have you drained sherry?

PAUL: I have sipped seltzer.

JOAN: Shall we show our love by the reflections of our desire, or
shall we dream of a white crow?

PAUL: The dream is a reflection of our passion just as the white crow
is a reflection of our ambiguity.

JOAN: My desire to incite myself in my dreaming is also a reflection
of ambiguity.

PAUL: Well chanted, Joan.  I craftily desire you.

------------------------------

Date: Wed 14 Nov 84 17:55:17-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Speech Acts

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                ABSTRACT OF TODAY'S SEMINAR
  ``Natural language from the Standpoint of Artificial Intelligence''

An intelligent individual, human or computer program, must act on the basis
of what it believes in advance modified by what it observes and what it
learns from linguistic communication.  Thinking about how the achievement
of goals is helped by communication leads to a somewhat different point of
view from one derived mainly from study of the corpus of spoken and written
language.  Namely,
  1. Communication should be regarded as a modifier of state of mind.
  2. The most basic form of communication is the single word sentence
        uttered under conditions in which the speaker and hearer share enough
        knowledge so that the single word suffices.  The complete sentence
        develops under conditions in which the speaker and the hearers share
        less context.
  3. Many of the characteristics of language are determined by so far
        unrecognized requirements of the communication situation.  They will
        apply to machines as well as people.
  4. An effort to make a Common Business Communication Languages for
        commercial communication among machines belonging to different
        organizations exhibits interesting problems of the semantics of
        language.
                                                ---John McCarthy


                SUMMARY OF LAST WEEK'S SEMINAR

Phil Cohen of SRI gave a seminar in which he claimed that illocutionary act
recognition is not necessary for engaging in communicative interaction.
Rather, engaging in such interaction requires intent/plan recognition.  In
support of this claim, he presented a formalism, being developed with Hector
Levesque (Univ.  of Toronto), that showed how illocutionary acts could be
defined in terms of plans --- i.e., as beliefs about the conversants' shared
knowledge of the speaker's and hearer's goals and the causal consequences
of achieving those goals.  In this formalism, illocutionary acts are no
longer conceptually primitive, but rather amount to theorems that can be
proven about a state-of-affairs.  As an illustration, the definition of a
direct request was derived from an independently-motivated theory of action,
rather than stipulated.  Just as one need not determine if a proof
corresponds to a prior lemma, a hearer need not actually characterize the
consequences of each utterance in terms of the IA theorems, but can simply
infer and respond to the speaker's goals.  However, the hearer could
retrospectively summarize a complex of utterances as satisfying an
illocutionary act.  Moreover, it was claimed that the framework can
characterize a range of indirect speech acts as lemmas, which can be derived
from and integrated with plan-based reasoning.  The discussant, Ivan Sag,
related the theory to Gricean maxims of conversation, and to the ``standard''
view of how pragmatics fits into a theory of linguistic communication.


                        NEW CSLI REPORT

A final edition of Report No. CSLI-9-84, ``The Implementation of Procedurally
Reflective Languages'' by Jim des Rivieres and Brian Cantwell Smith, has just
been published. Copies may be obtained by writing to Dikran Karagueuzian
at the Center (Dikran at SU-CSLI).

------------------------------

End of AIList Digest
********************

∂15-Nov-84  0125	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #155    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Nov 84  01:22:45 PST
Date: Wed 14 Nov 1984 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #155
To: AIList@SRI-AI


AIList Digest           Thursday, 15 Nov 1984     Volume 2 : Issue 155

Today's Topics:
  News - Recent AI Articles,
  AI Tools - FRL Source & New Lisp for VAXen,
  Logic Programming - Compiling Logic to Functional Programs
  Algorithms - Malgorithms,
  Seminar - Inductive Learning
----------------------------------------------------------------------

Date: Mon 12 Nov 84 15:30:28-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Recent AI Articles

Oscar Firschein has called my attention to three articles on AI in
the November issue of Datamation.  The Overselling of Expert Systems,
by Gary R. Martins, is a devastating attack on the current crop of
production-rule interpreters.  The Blossoming of European AI, by
Paul Tate, is an informative piece on expert systems development
in Europe that is much more positive about the approach, but ends
with the appeal "Please, be honest."  AI and Software Engineering,
based on Robert Kowalski's 1983-84 SPL-Insight Award lecture, advocates
logic-based programming; I found the presentation discursive and
inconclusive, but there's a nice example concerning the expression
of British citizenship laws as logical rules.


Martins makes some very good points about current expert systems
and development shells (e.g., "the blackboard model of cooperating
expert processes" is just a longer name for the old COMMON storage
facility in FORTRAN), but he neglects hierarchical inference (as in
MYCIN and PROSPECTOR), learning and self-modification (as in AM/EURISKO),
and the benefits of new ways of looking at old problems (hypothesize
and test in DENDRAL, "parallel" activity of cooperating experts
in HEARSAY).  He describes rule-based systems as clumsy, resource-hungry,
and unsuitable for complex applications, and regards them as a result
of confusing AI science (simple cognitive models, LTM and STM, etc.)
with engineering.  He does favor the type of serious AI development
being pursued by DARPA, but seems to think that most of the current
"expert systems" will be limited to applications in the personal
computer market (applications that could have been coded just as
easily with decision tables, decision trees, or other methodologies).

Martins also tells why he thinks the few expert systems mentioned above
(and also R1/XCON-XSEL) have been so successful.  His points are worth
considering:

  1) Brilliant programmers.
  2) Easy or carefully delimited problems.
  3) Plenty of time and funding in a favorable environment.
  4) Developers were not saddled with expert system tools!
     They develop systems to fit problems, not the other way around.
  5) Luck -- other promising systems didn't make it to the finish line.
  6) Public relations; some of these wonders are less than the public
     believes them to be.


For a much more positive outlook on expert systems, or at least on
knowledge-based systems, see Frederick Hayes-Roth's overview in
the October issue of IEEE Computer.  (One minor typo: Figure 5
should have an instance of PROLOG changed to POPLOG.)

                                        -- Ken Laws

------------------------------

Date: 13 Nov 1984 21:29-EST
From: milne <milne@wpafb-afita>
Subject: Frl Source


                        MIT FRL Available

I have become the "keeper of the source" for FRL, originally from
MIT and implemented in FranzLisp. The system includes a machine-readable
version of the manual and a demo of the Tower of Hanoi and an ATN.
I am happy to distribute the sources free of charge, subject to the
following conditions:
        1. Although I will distribute it, I am not a maintainer of the
software. I do not guarantee it is free of bugs (but I think it is),
and I do not have time to fix problems.
        2. I can write UNIX tar tapes only. Sorry, no mail or FTP
transfers. (The source is about 95 files.)
        3. It includes a UNIX and a VMS make file, but I can write only
tar tapes.
        4. To get a copy, send a blank tape to:
                Dr. Rob Milne
                AFIT/ENG
                WPAFB, OH 45433
        I will write the tape and send it back.
cheers,
Rob Milne
Director, AI Lab
Air Force Institute of Technology
milne@wpafb-afita

------------------------------

Date: Tue, 13 Nov 84 11:20:19 -0200
From: jaakov%wisdom.BITNET@Berkeley (Jacob Levy)
Subject: Announcement of new Lisp for UN*X 4.x VAXen

I don't know if this is the appropriate place for such an announcement,
but here goes, anyway :-


        YLISP, a Coroutine-based Lisp System for VAXen.
        -=============================================-

        A friend of mine, Yitzhak Dimitrovski, and I wrote a Lisp
system for UN*X 4.x systems on VAXen. It has the following features :-

        o - Coroutines and  closures. The  system uses  these to implement
            the user-interface, single-stepping and  error-handling.  It's
            easy to write a scheduler and time-share YLISP between  two or
            more user programs.
        o - Multiple-dimension arrays.
        o - Multiple name  spaces (oblists) arranged  in a tree hierarchy.
            This is similar to the Lisp Machine facility.
        o - Defstruct structure definition package.
        o - Flavors object-oriented programming tools.
        o - User-extensible  evaluator (it is  possible to (re)define  the
            actions of 'eval', 'apply' and 'print'  relative to all  user-
            and pre-defined types).
        o - Integer arithmetic. No floating-point, sorry; I don't think
            that's really necessary, but it *could* be hacked. No BIGNUMs
            either.
        o - Good user-interface with history, sophisticated error handling
            and function-call and variable-assignment tracing facilities.
        o - Extensive library of ported and user-contributed programs, such
            as a variant of the Interlisp  structure editor, 'loop' macro,
            'mlisp' Pascal-like embedded language, etc.
        o - Compiler  that  generates efficient native  assembler code for
            the VAXen. The compiler is provided as a separate program, due
            to size  considerations. The compiler is  written entirely  in
            Lisp, of course.
        o - Extensive online documentation, as well as  a 400-page  manual
            describing the whole system from a programmer's point of view.
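The coroutine-based time-sharing mentioned in the first bullet can be sketched with generator-based coroutines. This is an illustration of the idea in Python, not YLISP's actual API; the function names are mine:

```python
# Round-robin scheduler time-sharing two coroutines, in the spirit of
# the YLISP feature described above (illustrative sketch only).

def counter(name, n):
    """A 'user program' that yields control back after each step."""
    for i in range(n):
        yield f"{name}:{i}"

def schedule(*tasks):
    """Run tasks round-robin until all are exhausted."""
    queue = list(tasks)
    trace = []
    while queue:
        task = queue.pop(0)
        try:
            trace.append(next(task))
            queue.append(task)       # still alive: back of the queue
        except StopIteration:
            pass                     # task finished: drop it
    return trace

print(schedule(counter("A", 2), counter("B", 2)))
# -> ['A:0', 'B:0', 'A:1', 'B:1']
```

Each `yield` plays the role of a coroutine handing control back to the scheduler, which is all a simple time-sharing loop needs.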

        The system is named  'YLISP', and was written for 4.1 when we were
students at the Hebrew University of Jerusalem. Since then, Yitzhak has
left  for the  US and  is  currently a  Ph.D. student in  Prof. Schwartz's
Supercomputer group at Courant. I have continued to  develop the system on
my own, and have ported it to UN*X 4.2.

        I am looking for a site that is willing to handle the distribution
of this software from the US, by letting  one FTP it  from their computer.
Alternatively, I am also willing to supply people  with magtapes of YLISP,
for the cost of the tape and handling charges (about $70 apiece).  If you
are interested, please respond by electronic mail to one of  the addresses
given below. I will be  ready  to  start distributing  the  system in  two
weeks' time.

        Rusty Red (AKA Jacob Levy)

        BITNET:                         jaakov@wisdom
        CSNET and ARPA:                 jaakov%wisdom.bitnet@wiscvm.ARPA
        UUCP: (if all else fails..)     ..!decvax!humus!wisdom!jaakov

------------------------------

Date: Mon 12 Nov 84 23:22:28-MST
From: Uday Reddy <U-Reddy@UTAH-20>
Subject: Compiling Logic to Functional Programs

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

The only work I know on compiling logic to functions:

1. Bellia , Levi, Martelli: On compiling Prolog programs on
demand driven architectures,  Logic Programming Workshop,
Albufeira, '83

2. Reddy: Transformation of logic programs to functional
programs, ISLP, Atlantic City, 84.

The two pieces of work are similar.  They should be distinguished
from other pieces of work cited by Webster (Lindstrom and Panangaden,
Carlsson, Bruce Smith) which interpret logic in a functional language
rather than compile a logic language into a functional language.

The translation approach has limitations in that it needs mode
annotations (either from the programmer or chosen by the compiler)
and it cannot handle "logical variables".  I don't know of any work
that overcomes these limitations.  Personally, I believe they cannot
be overcome.  One can probably prove this assertion, provided one
can formalize the difference between translation and interpretation.

Combinator calculus is equivalent to lambda calculus, and there are
translators available from one to the other.  So, using combinators
neither simplifies nor complicates the problem.
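Reddy's point that combinators and lambda calculus are inter-translatable can be shown in a few lines. A hedged sketch (Python used purely as a vehicle; not from the digest): the identity function, a lambda term, is expressible with just S and K.

```python
# The classic S and K combinators.  I = S K K, since
# S K K x = K x (K x) = x, so identity needs no lambda of its own.

S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

I = S(K)(K)

print(I(42))  # -> 42: the identity, built from combinators alone
```

A translator between the two calculi mechanizes exactly this kind of rewriting, which is why using combinators neither simplifies nor complicates the compilation problem.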

-- Uday Reddy

------------------------------

Date: 12 Nov 84 08:56:04 PST (Monday)
From: Nick <NNicoll.ES@XEROX.ARPA>
Subject: Re: Badgorythms

What makes the following worse than a normal software badgorythm is that
it is implemented in the language compiler...

"Another reason for bloated code:  increment a byte in memory can be
done in a single instruction, but they load the byte into a register,
extend it to a word, extend that to a long, add one, and then store the
low 8 bits of the long back into memory."

This gem came from the following msg;

  From: <Devon@MIT-MC.ARPA>
  Subject: Macintosh language benchmarks
  To: homeier@AEROSPACE.ARPA, info-mac@SUMEX-AIM.ARPA

I have been using the [notoriously awful] Whitesmith C compiler
available from "software toolworks" or some similar name.  It does work,
and there are header files defining all the data structures, and
interface files so you can make all the ROM calls.  I haven't found any
serious bugs, but code bloat is amazing.  One reason is that Apple's
linker is a crock that doesn't have the concept of scanning a
library!  Instead it blithely loads everything contained in each library
file (which you must specify yourself -- blech!) regardless of whether
it is called for or not.  Another reason for bloated code:  increment a
byte in memory can be done in a single instruction, but they load the
byte into a register, extend it to a word, extend that to a long, add
one, and then store the low 8 bits of the long back into memory.


\\ Nick

------------------------------

Date: Mon 12 Nov 84 14:57:15-PST
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: malgorithms

I just received a msg from Andrei Broder (Broder@decwrl) saying that he and
George Stolfi wrote a paper called "Pessimal Algorithms and Simplexity
Analysis" which is to appear in the SIGACT news. Maybe people who expressed
interest in my msg will find this "joke paper" (Andrei's term) worth reading.

jean-luc

------------------------------

Date: 13 Nov 84 08:43 PST
From: Todd.pasa@XEROX.ARPA
Subject: Malgorisms (What malgorithms were before LISP)

        Yet another class of malgorithms is generously provided by the
INTERLISP-D implementation of LISP ... algorithms that look like
malgorithms but really aren't. An example:

To strip an M-element list from the top of a larger list X, a seemingly
logical approach would be to take the first element of X, append it to
the second, and so on until the Mth. In INTERLISP-D, a faster way to
accomplish this is to take the difference of the larger list X and its
tail beginning at element M+1. In other words, to "subtract" the list
lacking the elements you want from the full list. The malgorithm code
appears as:

(LDIFF X (NTH X (ADD1 M)))

The "logical" code as:

(FOR I FROM 1 TO M COLLECT (CAR (NTH X I)))


        As is shown below, the "malgorithm" is actually a faster way to
solve the problem. Timed executions for 100 sample runs yielded the
following results:


                                "Malgorithm"       "Logical method"

                  M=4         .00114 sec.               .00127
                  M=30        .00902                    .0214
                  M=100       .0301                     .170


        The method breaks down when you try to extract sublists from
arbitrary positions inside larger lists ... execution of a "logical"
method similar to the above is MUCH faster. However, I am still amazed
that a malgorithm as seemingly ridiculous as this one is can be so
efficient for even a special case.


                                                --- JohnnyT

"Things are more like they used to be now than they ever were"

------------------------------

Date: 11 Nov 1984  17:03 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Inductive Learning

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


Inductive Learning: Recent Theoretical and Experimental Results
                    Ryszard Michalski

Wednesday   November 14     4:00pm      8th floor playroom


Inductive learning is presented as a goal-oriented and resource-constrained
process of applying certain rules of inference to the initial observational
statements and hypotheses. This process involves a new type of
inference rule, called "generalization rules."
In contrast to truth-preserving deductive rules, inductive
generalization rules are falsity-preserving.

Two types of inductive learning are distinguished,
learning from examples (concept acquisition or reconstruction)
and learning by observation (concept formation and descriptive
generalization). Learning from
examples in turn divides into "instance-to-class" and
"part-to-whole" generalization.

We will briefly describe recent experiments with two inductive
learning systems:
1 - for learning from examples via incremental concept refinement, and
2 - for automated formation of classifications via conceptual clustering.
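A minimal sketch of the "instance-to-class" generalization the abstract mentions (my own toy illustration in Python, not Michalski's actual algorithm): generalize attribute-vector examples by wildcarding any attribute that varies. The result covers all the examples, and possibly non-examples too, which is the sense in which generalization rules are falsity-preserving rather than truth-preserving.

```python
# Toy instance-to-class generalization via the "dropping condition"
# rule: any attribute that differs across examples becomes a wildcard.

def generalize(examples):
    hypothesis = list(examples[0])
    for ex in examples[1:]:
        for i, value in enumerate(ex):
            if hypothesis[i] != value:
                hypothesis[i] = "?"      # drop the varying condition
    return hypothesis

print(generalize([("red", "round", "small"),
                  ("red", "round", "large")]))
# -> ['red', 'round', '?']
```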

------------------------------

End of AIList Digest
********************

∂15-Nov-84  2000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #156    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Nov 84  20:00:27 PST
Date: Thu 15 Nov 1984 16:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #156
To: AIList@SRI-AI


AIList Digest            Friday, 16 Nov 1984      Volume 2 : Issue 156

Today's Topics:
  Programming Languages - Horror Stories,
  Algorithms - Interlisp-D "Malgorithm",
  AI Tools - DEC Software Agreements & Japanese Lisp Machines,
  Seminars - Logo for Teaching Language &
    Knowledge Representation and Temporal Representation
----------------------------------------------------------------------

Date: 14 Nov 84 20:45:32 EST
From: Edward.Smith@CMU-CS-SPICE
Subject: Programming Language Horror Stories

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

In a couple of weeks I'm going to give my last lecture in Comparative
Programming and as a way of motivating these undergraduates about the
importance of the consideration of language design in their future work I
was going to present some examples of particularly BAD (ugly or dangerous or
however you wish to define that) design. So, if you have any favorite horror
story about some programming language you'd like to contribute I would
appreciate it - I will put all the good ones up in a file somewhere. These
stories should be SHORT and to the point, and could be things like these
classics:
  - the property of FORTRAN to compile and execute (generally without
complaining) a DO loop construct without a comma
  - just about ANY one-line APL program (my favorite being the one-line game
of LIFE attributed to Marc Donner)
  - the use of the space character in SNOBOL as (a) the separator between
labels and statements, (b) concatenation operator, (c) pattern matching
operator, or (d) separator for the pattern match assignment operator "."
  - (FORTRAN's full of them) the property of early FORTRANs to change the
value of "constants" like 5 to say 3 by an interesting parameter passing
mechanism

Please send to ets@cmu-cs-spice. Thanks in advance.

------------------------------

Date: 15 Nov 84 12:39 PST
From: JonL.pa@XEROX.ARPA
Subject: Interlisp-D "malgorithm"?

Regarding the "malgorithm" and "Logical method" proposed by
Todd.pasa@XEROX.ARPA:  using the NTH function repeatedly on a list of
elements (as opposed to an array or "vector" of elements) has got to be
a classic "malgorithm".  The access time for selecting the n'th element
of a list is proportional to n, whereas the similar time for arrays or
vectors should be essentially constant.  The repeated selection evident
in Todd's example converts a linear algorithm into a quadratic one.

Just for the record, let me propose what I (and I suspect many other
long-time "Lisp lovers" would have considered to be the "logical"
algorithm):

    (for ITEM in X as I to M collect ITEM)

A little analysis would show this to be asymptotically better than the
proposed "malgorithm" by a factor somewhere between 1 and 2 -- 2 because
the "malgorithm" traverses the M-prefix of the list X twice, and 1
because the CONS time cost may be made arbitrarily high, thereby
occluding any other effect.
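The linear-versus-quadratic point can be made concrete with a small count of cell traversals. A sketch in Python (modeling cons cells as pairs; this is my illustration, not Interlisp):

```python
# Why repeated NTH is quadratic: reaching the n'th cell of a linked
# list costs n-1 steps, so doing it for n = 1..M costs ~M^2/2 steps,
# while a single traversal collecting as it goes costs ~M steps.

def nth(cell, n):
    """Return (n'th element, number of cdr steps taken)."""
    steps = 0
    while n > 1:
        cell = cell[1]; n -= 1; steps += 1
    return cell[0], steps

def repeated_nth_steps(cells, m):    # the "malgorithm" access pattern
    total = 0
    for i in range(1, m + 1):
        _, s = nth(cells, i)
        total += s
    return total

# Build the list (1 2 ... 10) as nested (car . cdr) pairs.
cells = None
for v in range(10, 0, -1):
    cells = (v, cells)

print(repeated_nth_steps(cells, 10))   # -> 45 steps, vs ~10 for one pass
```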

-- JonL White --

------------------------------

Date: 15 Nov 84 0524 EST
From: Dave.Touretzky@CMU-CS-A.ARPA
Subject: DEC news release

The following two paragraphs come from a medium-length article in DEC's
Large System News.


        Digital Signs Marketing Agreements With Six AI Firms

Digital has signed agreements with a number of leading independent producers of
Artificial Intelligence (AI) software to market cooperatively their products
on VAX computers and personal computer systems.

Independent AI software producers include the Carnegie Group, Inc.; Digital
Research, Inc. (DRI); Gold Hill Computers; Inference Corp.; Prologia; and
USC Information Sciences Institute (ISI).  AI software packages developed to
run on Digital's computers include Inference's ART, Gold Hill's GCLISP,
ISI's Interlisp, Prologia's PROLOG II, and the Carnegie Group's SRL+ and PLUME.


In two other articles in the same newsletter, Digital announced availability
of VAX Lisp 1.0 as a fully-supported Common Lisp product, and availability
of OPS5 as a supported product.  Digital's OPS5 is written in Bliss-32 for
performance reasons; it includes both an interpreter and a compiler.

------------------------------

Date: 14 Nov 84 09:31:35 PST (Wed)
From: Jed Marti <marti@randgr>
Subject: Japanese Lisp Machines.

  I just saw the request for information about the Fujitsu Alpha. I
recently spent a week in Japan as a guest of the RIKEN institute which
provided a tour of some of the local Tokyo efforts in this direction.
Perhaps it would be of interest to the AIList readers.

Jed Marti.



                      Japanese Lisp Machines

The RSYMSAC conference held at the Riken institute in Saitama, Japan on
August 21-22, provided an opportunity for a close view of Japanese
efforts to construct very fast machines for running large scale
symbolic algebra and AI programs. Four of us toured two computer
centers and three Lisp machine construction projects, talking to
implementors and trying our favorite test programs. This short report
describes the state of their systems and the Japanese symbolic algebra
environment.

                           FLATS at Riken

The Riken institute conducts research in the physical sciences and
operates a Fujitsu M380H (an IBM 370 look-alike) providing both time
sharing and batch services. During the day, computer algebra system
users access Cambridge Lisp running REDUCE 3.1. The symbolic
computation group operates a VAX 11/750 running VMS, a host of 16 bit
micro-computers, and the FLATS machine.

The symbolic computation group officially unveiled FLATS (Formula Lisp
Association Tuple Set) at the conference. The Mitsui Ship Building
Company constructed the hardware based on designs of the Riken group.
Built from SSI ECL components, the CPU executes a micro-instruction
every 50 nanoseconds and a Lisp instruction every 100 nanoseconds from
a 300 bit by 256 word micro store and 8 megabytes of 450 nanosecond
main memory. Over 70,000 wires connect the back plane, making
conventional hardware debugging impossible. The engineers exercise
modules on a special test jig or through the attached support
processor.

The hash code generation hardware sets FLATS apart from conventional
Lisp machines. It computes a hash code in the basic machine cycle time
for extremely fast property list access and CONS cell generation.
Improvements in execution speed and program clarity more than offset
the loss of the RPLACA and RPLACD functions on hashed CONSes.
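The trade FLATS makes can be sketched in a few lines of hash-consing. This is an illustrative Python sketch of the general technique, not the FLATS microcode: equal CONSes are the same cell, so structural equality collapses to pointer equality, and destructive update must be given up because a mutated cell would be shared by every structure containing it.

```python
# Hash-consing: CONS returns the *same* cell for equal arguments,
# so EQUAL structures become EQ (pointer-identical).  The price is
# that RPLACA/RPLACD-style mutation is unsafe on shared cells.

_table = {}

def cons(car, cdr):
    key = (car, id(cdr))        # cdr cells are themselves hash-consed,
    if key not in _table:       # so identity of cdr is a sound key
        _table[key] = (car, cdr)
    return _table[key]

a = cons(1, cons(2, None))
b = cons(1, cons(2, None))
print(a is b)                   # -> True: one shared cell for both lists
```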

The designers increased speed with a number of special features:

   3 separate cache memories for instructions, data, and stack

   special micro-coded instructions for the garbage collector and
     big numbers

   CALL, JUMP and RETURN executed in parallel with other instructions

   hardware type checking in parallel with data operations

   3 address instruction codes

   hardware support for paging

   data path width of 64 bits

The FLATS machine, without hash CONS and property lists, runs REDUCE
3.0 at about IBM 3033 speeds. Several papers presented at RSYMSAC
described the status of FLATS and the design of the next FLATS machine
that the group hopes to construct from Josephson Junction circuits
[2-3].

               University of Tokyo Computer Center

We visited the University of Tokyo Computer Center to find out more
about UTILISP (University of Tokyo Interactive LISP) implemented by the
Department of Mathematical Engineering and Instrumentation Physics
[1]). Probably one of the largest academic installations in the world,
the center operates two Hitachi M280 dual processors (roughly
equivalent to an IBM 3081) each with 32 megabytes of main storage and a
Hitachi M200H with 16 megabytes of main storage. A Hitachi S810/2
vector processor with 64 megabytes of main memory and a VAX 11/780 with
4 megabytes complement the general purpose machines. On-line storage
consists of 48 gigabytes on disk, and 37 gigabytes in data cells. The
center emphasizes user convenience. Users mount their own tapes, take
output off printers, read their own card decks (we didn't actually see
anyone do this, but the machine was there), tear off plots and so on.
The lightly loaded machines run an average of only 4,000 jobs per day.
Users need not wait for terminals and other equipment, an enviable
situation indeed.

UTILISP resembles MacLisp. An effort to transport MACSYMA to UTILISP
suffers only from the lack of built-in big number arithmetic.

                            Fujitsu ALPHA

A long train and subway ride brought us to the third tour stop, the
Fujitsu Laboratories in Kawasaki, home of the Lisp machine ALPHA [4-5].
The ALPHA offloads time sharing symbolic processing jobs from IBM style
mainframes. More than one ALPHA can be connected to a single mainframe,
which supplies I/O device, filing system, editing and operating system
support.

The ALPHA has 8 megabytes of real memory with a 16 megabyte virtual
address space. Memory and data buses are 32 bits wide with Lisp items
composed of an 8 bit tag and 24 bit value. The ALPHA processor has a
high speed hardware stack of 8k words with special hardware for
swapping segments to and from slower memory. The division of the stack
into blocks permits high speed switching between different processes.
To support tagged data items, a micro-instruction jump based on the 8
bit tag is implemented. The ALPHA machine performs data calculations by
masking off the tag bits in hardware, rather than software. The machine
has over 7700 STTL, 64k bit RAMs and 4k high speed RAMs.

     Micro Instructions - 48 bits wide, 160 ns, 16k words.
     Main Memory - Virtual 16 M words, Real 8 M words,
          Page size 4 K bytes.
     Stack - Logical stack 64 K words, Hardware stack 8 K words,
          Swapping block size 2 K bytes.

The ALPHA runs UTILISP and has an interpreter, compiler, and copying
garbage collector. Fujitsu claims the ALPHA runs three times faster
than the Symbolics 3600 and 5 times faster than DEC 2060 MACLISP.
Fujitsu uses the ALPHA for CAD, machine translation, and natural
language understanding.


        ELIS - Nippon Telegraph and Telephone Public Corporation

Nippon Telegraph and Telephone Public Corporation demonstrated the ELIS
machine and TAO language, a "harmonic" mixture of Lisp, Smalltalk, and
Prolog, to quote the authors [6]. A PDP 11/60 provides file and
operating system support while the ELIS hardware performs the list
processing functions. The ELIS hardware features 32 bit items with 8
bit tags providing for 16 million items (128 megabytes). The basic
microcycle time is 180 ns in 32k of micro-instructions 64 bits wide.
Main memory is 4 megabytes with an access time of 420 ns and a special
system stack of 32k 32 bit items. Deep binding is used and multiple
processes are supported by stack groups, and the cpu switches between
contexts very fast (2 microseconds unless there is some stack
swapping). For identical tasks programmed in the three different
paradigms, the procedural version is the fastest, with the
object-oriented version about 1.1 times slower and the logic version
about twice as slow.

Acknowledgement: I would like to thank Dr. N. Inada of Riken for
organizing both RSYMSAC and the tour.

List of References
1. Chikayama, Takashi, `UTILISP Manual', Technical Report METR 81-6
    (September 1981), Department of Mathematical Engineering and
    Instrumentation Physics, University of Tokyo, Bunkyo-Ku, Tokyo,
    Japan.
2. Goto, E., Shimizu, K., `Architecture of a Josephson Computer
    (FLATS-2)', RSYMSAC, Wako-shi, Saitama, 351-01 Japan, 1984.
3. Goto, E., Soma, T., Inada, N., et al, 'FLATS: A Machine for
    Symbolic and Algebraic Manipulation', RSYMSAC, Riken, Wako-shi,
    Saitama, 351-01 Japan, 1984.
4. Hayashi, H., Hattori, A., Akimoto, H., `ALPHA: A High-Performance
    LISP Machine with a New Stack Structure and Garbage Collection
    System', Proceedings of the 10th Annual International Symposium
    on Computer Architecture, pages 342-347.
5. Hayashi, H., Hattori, A., Akimoto, H., `LISP Machine "ALPHA"',
    Fujitsu Scientific and Technical Journal, Vol. 20, No. 2,
    pages 219-234.
6. Okuno, H. G., Takeuchi, I., Osato, N., Hibino, Y., Watanabe, K.,
    `TAO: A Fast Interpreter-Centered System on Lisp Machine ELIS',
    Proceedings of the 1984 Conference on LISP and Functional
    Programming.


Jed Marti  MARTI@RAND-UNIX

------------------------------

Date: 9 November 1984 1359-EST
From: Jeff Shrager@CMU-CS-A
Subject: Seminar - Logo for Teaching Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]


Subject:  Guest speaker on teaching language with LOGO
Source: charney (davida charney @ cmu-psy-a)

               English Department -- Guest Speaker

NAME:   Wallace Feurzeig  (Bolt, Beranek and Newman)
DATE:  Friday, November 16
TIME:  9 - 10:30 am  (There will be coffee and doughnuts.)
PLACE:  Adamson Wing in Baker Hall

TITLE: Exploring Language with Logo

The talk gives examples of materials from our forthcoming book "Exploring
Language with Logo" to be published by Harper and Row o/a first quarter,
1985, co-authored by Paul Goldenberg and Wallace Feurzeig.  The book
attempts to develop a qualitatively different approach to the teaching of
language arts in schools.  Our approach is based on two major intellectual
developments -- the theory of generative grammar in formal linguistics and
the invention of programming languages.  The important new idea from
linguistics is that a grammar can be used as a constructive instrument to
generate sentences, in contrast to the conventional school experience of
grammar as an analytic device, a set of tools for parsing sentences to
determine whether or not they are instances of "good" English.  This shift
has enormous psychological and pedagogical benefits: it switches the
learner's focus and viewpoint from rule learner to language creator.  At the
same time, it provides a distinctly different, more accessible and
acceptable way of introducing the formal structures of language and the
regularities and rules describing these structures.

The other major intellectual development, programming languages, provides
the most distinctive and radical departure from the present language arts
course. Our approach depends fundamentally upon programming ideas and
activities.  In our presentation, the key and central language concepts are
introduced and developed as Logo programs.  Teachers and students are engaged
in programming projects throughout.  The use of a programming language in the
English language classroom makes the idea of generative grammars concrete
in tasks readily accessible to schoolchildren.  Moreover, in the environment
of programming, grammar models are transformed from highly abstract
formalisms into runnable objects in semantic situations that are meaningful
and interesting to students.  For example, students can create Logo programs
that simulate the grammar of gossip, puns, jokes, love letters, baby talk,
proverbs, quizzes, conversational discourse, poems of various forms and
expressive styles, and many other kinds of texts.  Examples will illustrate
the approach and materials at three levels: the structure of sentences,
structures within a word, and larger structures.
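The generative view described above can be made concrete with a toy grammar used as a constructive instrument rather than a parser. A sketch in Python rather than Logo (the grammar and names are my own illustration):

```python
# A tiny generative grammar: expand nonterminals recursively, choosing
# a production at random, until only words remain.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S", rng=random):
    if symbol not in GRAMMAR:                 # terminal: a word
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [w for part in expansion for w in generate(part, rng)]

# Prints a random five-word sentence such as "the dog chases the cat".
print(" ".join(generate()))
```

Every output is grammatical by construction, which is exactly the shift from grammar-as-parser to grammar-as-generator that the talk advocates.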

------------------------------

Date: 13 November 1984 11:12-EST
From: Rosemary B. Hegg <ROSIE @ MIT-MC>
Subject: Seminar - Knowledge Representation and Temporal Representation

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

    COOPERATIVE COMPUTATION, KNOWLEDGE ORGANIZATION AND TIME

                         John K. Tsotsos

                 Department of Computer Science
                      University of Toronto
                    Toronto, Ontario, Canada


                DATE:    Friday, November 16, 1984
                TIME:    1.45 pm  Refreshments
                         2.00 pm Lecture
               PLACE:    NE43-7th Floor Playroom


     A cooperative processing scheme is presented that deals with
time-varying information.  It operates over a network of temporal
concepts, organized along common representational axes: generalization,
aggregation, similarity, and temporal precedence.  Units in this
network are organized into computation layers, and these layers are
conceptualized as "recognizing" concepts that can be organized along a
generalization/specialization dimension.  Thus elements of both
"localist" and "distributed" views of concept representations are
present.  Static and dynamic data are treated in the same way -- as
samples over time -- and thus sampling issues are directly addressed.
This process is a time-varying non-linear optimization task; it
differs from past cooperative computation schemes in three respects:
a) our information is not uniform, but rather different concepts are
represented at different levels of the hierarchies; b) there are
multiple interacting networks, each organized according to different
semantics; c) the data is time-varying and, more importantly, the
structure over which relaxation is performed is time-varying.  The
cooperative process to be described has the qualitative properties we
believe are desirable for temporal interpretation, and its performance
will be described empirically, and in a qualitative fashion through
the use of several examples.

HOST:  Prof. Peter Szolovits

------------------------------

End of AIList Digest
********************

∂18-Nov-84  1358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #157    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Nov 84  13:57:07 PST
Date: Sun 18 Nov 1984 12:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #157
To: AIList@SRI-AI


AIList Digest            Sunday, 18 Nov 1984      Volume 2 : Issue 157

Today's Topics:
  Conference - Expert Systems Symposium,
  Expert Systems - Skinner,
  Algorithms - Scheduling Algorithm Question & Malgorithm,
  Logic Programming - Compiling Logic to Functions,
  Linguistics - In Praise of Natural Languages,
  Seminars - Conceptual Change in Childhood &
    Relational Interface, Process Representation &
    Partial Winter Schedule at NCARAI,
  Course & Conference - Logic, Language, and Computation Meeting
----------------------------------------------------------------------

Date: 26 Oct 1984  9:37:06 EDT (Friday)
From: Marshall Abrams <abrams@mitre>
Subject: Expert Systems Symposium

I am helping to organize a Symposium on Expert Systems in the Federal
Government. In addition to papers, I am looking for people to serve on
the program committee and the conference committee, and to serve as
reviewers and session chairmen. The openings on the conference committee
include local arrangements, publicity, and tutorials.

Please contact me or the program chairman (karma @ mitre) with
questions and suggestions. The call for papers is available
on request.

Marshall Abrams

------------------------------

Date: Friday, 16 Nov 84 11:34:27 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Quote for our times...

"If Skinner were born in our time, he'd have been an expert
 systems researcher."

                        -- Peter Pirolli  11/16/84 in the heat of
                                          an argument.

(Quoted with permission)

------------------------------

Date: 15 Nov 84 11:38:40 EST
From: DIETZ@RUTGERS.ARPA
Subject: Scheduling algorithm questions

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

I want an online algorithm for preemptive scheduling on a single processor
with release times and deadlines (no precedence relations).  This problem
is trivial offline, but I want to be able to add new jobs (or determine
they cannot be added) in polylog time.  Has anyone looked at this problem?
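For contrast with the online version being asked about, the offline case can be sketched as a simulation of preemptive Earliest-Deadline-First, which is optimal for this setting. The following is an illustrative sketch, not from the original question:

```python
import heapq

def edf_feasible(jobs):
    """True iff the jobs fit preemptively on one processor.

    jobs: (release, deadline, work) triples.  Preemptive EDF is optimal
    for this problem, so simulating it decides feasibility offline.
    """
    events = sorted(jobs)              # ordered by release time
    ready, t, i = [], 0, 0
    while i < len(events) or ready:
        if not ready:                  # idle: jump to the next release
            t = max(t, events[i][0])
        while i < len(events) and events[i][0] <= t:
            _, d, w = events[i]
            heapq.heappush(ready, (d, w))
            i += 1
        d, w = heapq.heappop(ready)    # most urgent deadline first
        nxt = events[i][0] if i < len(events) else t + w
        run = min(w, nxt - t)          # run until done or next arrival
        t += run
        if run < w:
            heapq.heappush(ready, (d, w - run))   # preempted on arrival
        elif t > d:
            return False               # finished after its deadline
    return True
```

The online requirement is exactly what this does not give: each insertion here costs a full re-simulation rather than polylog time.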

Paul Dietz (dietz@rutgers)

------------------------------

Date: Fri, 16 Nov 84 15:30 CST
From: Boebert@HI-MULTICS.ARPA
Subject: Old high-level malgorithm


A (very) early IBM FORTRAN compiler contained the following jewel of an
error message:

"COLUMN cc OF CARD nnnn CONTAINS A 12-4 PUNCH MINUS SIGN INSTEAD OF AN
11 PUNCH MINUS SIGN.  CORRECT AND RESUBMIT."

This was a fatal error.

(For the youngsters, "12-4 punch" and "11 punch" refer to the patterns
of holes in a card column; I believe the 12-4 was officially a "dash".
Also, FORTRAN only spoke capital letters; this was an Eisenhower-era
compiler, and shouted at you in proper authoritarian style.)


[Speaking of user-interface styles:
Commodore Grace Hopper tells of the time a Navy (or perhaps just
Pentagon) programming team realized that a computer could "speak German"
if you just replaced JUMP with SPRUNGE, etc.  (Even JUMP was a novelty
at this time: it may have been the earliest COBOL compiler prototype.)
They set up a demo and passed around a memo saying "Come see our computer
compile this German program."  The brass were not amused at the idea
of an American military computer being trained to speak German, and the
team had to distribute another memo saying that the first was just a
bad joke -- no computer could possibly understand German!  -- KIL]

------------------------------

Date: Thu 15 Nov 84 19:31:17-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Compiling logic to functions

To add to my previous message on the topic, the fact that the effect of
logical variables cannot be achieved in functional languages is not a
linguistic limitation but an operational one.  Specifically, all logic
predicates are boolean-valued functions.  So, all Horn clauses can be
directly translated into function equations.

        A :- B1, ..., Bn.       =>      A = and(B1,...,Bn)
        A.                      =>      A = true

However, in traditional functional languages the translated logic programs
can only be used for rewriting.  They cannot be used to solve goals with
variables in them.
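The one-way reading (rewriting over ground goals only) can be illustrated in a few lines; the sketch below uses Python rather than a functional language, and the predicates and facts are invented for the example:

```python
# Each predicate becomes a boolean-valued function, and a clause body
# becomes a conjunction.  Evaluating ground goals works, but nothing
# here can solve a goal like grandparent(X, "mary") for the variable X.
facts = {("parent", "tom", "bob"), ("parent", "bob", "mary")}

def parent(x, y):
    return ("parent", x, y) in facts

def grandparent(x, z):
    # grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
    #   =>  grandparent(x,z) = or, over candidate y, of
    #                          and(parent(x,y), parent(y,z))
    return any(parent(x, y) and parent(y, z) for (_, _, y) in facts)
```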

If "narrowing" rather than "rewriting" is used as the operational semantics
of functional programs, they too can be used to solve goals and the effect
of logical variables is achieved.  For more details, see

        Hullot, Canonical forms and unification, Conf. Automated Deduction,
        1980.

        Lindstrom, Functional programming and the logical variable, to
        appear, POPL 85.

        Reddy, On the relationship between logic and functional languages,
        to appear in, Degroot, Lindstrom, Functional and Logic programming,
        Prentice-Hall, 85.

Uday Reddy

------------------------------

Date: Thu, 15 Nov 84 11:01:51 PST
From: April Gillam <gillam@AERO2>
Subject: In Praise of Natural Languages

Rick Briggs raises some very interesting questions about what is a natural
language, so I thought I'd air my views.  Natural language, in its broadest
sense, should include any communication between man and/or animal for which
there is an underlying common belief system. I'd even go so far as to
include non-verbal communications. I'm not a linguist, so this is just how
I view the term.  When dealing with machine translation, a working
definition restricting it to verbal or written communications of course
makes more sense.

It would be interesting if at some time we could interpret body language
well enough to have a computer analyze what a person says verbally and
bodily. (When pattern recognition has matured!) There are certainly some
people who have the sensitivity and receptiveness to do the interpretation.

Reading some of Dr. Kubler-Ross's work, it is an amazing
learning experience to see the level of interpretation she achieves with
dying patients, many of whom cannot express directly their knowledge of
their imminent death, however they still have a strong desire to
communicate this to someone, using an analogy or some indirect manner. She
writes of a terminally ill man who could not get out of bed without the use
of his cane, who one day said to take the cane away; shortly after, he
died. This man was letting her know that the time had come. But few, if any,
of us pick up on the cues. Do we really expect a computer to do this? It
also points up how vital context is to understanding.

It doesn't seem plausible to me that any language can express ALL "aspects
of the natural world". In Indian (from India) languages there are words for
levels of consciousness (eg. samadhi), for energy centers of the body (eg.
chakras), etc.  In English we have sophisticated words pertaining to
weaponry, to real estate, etc. Do you think an aborigine would have a word
or concept for garbage recycling? (Or coke bottle?) What I'm trying to say
is, language is cultural (as my friend Ellen, an anthropologist, succinctly
put it).

I find it hard to believe that Sastric Sanskrit, or any other language, can
contain the concepts of all of humanity's experiences. Have we ourselves
experienced enough of our reality to be able to express it, and does the
person we talk to have a common enough set of experiences to interpret what
we say? There are enough misunderstandings when we both speak the same
language, that I doubt another language will render a semantically exact
translation.  How can the color scheme be described to the land of the
blind? There is also a flavor to words, e.g. cabron in Spanish, or the
phrase "curses, foiled again" to those of us who've seen the Perils of
Pauline or comic strips.

I don't see it as a virtue, to be able to express oneself unambiguously.
Part of the power and beauty of language is the ability to make
multi-leveled statements, double entendres, analogies, etc.

It's interesting what Bill Frawley says about a change which complicates a
language being compensated by a simplification elsewhere. On some level
that is aesthetically pleasing, but I have no feel for whether that would
be the case.

In the proceedings of this year's AAAI conf. there was an interesting paper
in which a micro-environment (a context) of words, likely references and
multiple meanings for a particular topic was set up. If the topic was
Italian food, there'd be some notions of restaurants and pizza and such.
Then if the statement "Hold the anchovies" was encountered, it would be
known that it means "Do not put anchovies on the pizza", as opposed to
"Grasp the anchovies in your hand."  I don't have the reference handy, but
it looked like a good idea, as well as a lot of work.
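The micro-environment idea can be caricatured in a few lines; every name and sense below is invented for illustration and is not from the paper in question:

```python
# The active topic supplies likely word senses, so an ambiguous verb
# like "hold" resolves differently in different contexts.
SENSES = {
    "hold": {
        "italian food": "omit from the dish",
        "default": "grasp in the hand",
    },
}

def interpret(word, topic):
    """Pick the sense of `word` suggested by the current topic."""
    senses = SENSES.get(word, {})
    return senses.get(topic, senses.get("default"))
```

Real systems of this kind must also decide which micro-environment is active, which is much of the work the paper describes.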
                        - April Gillam

------------------------------

Date: Tue, 13 Nov 84 14:40:54 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Conceptual Change in Childhood

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 20, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Susan  Carey;  MIT  Psychology   Department;
                Center  for Advanced Study in the Behavioral
                Sciences

TITLE:          ``Conceptual Change in Childhood''

ABSTRACT:       In the tradition of recent Cognitive Studies
                tutorials,  this  paper is a tutorial on the
                proper description of cognitive development.
                At  issue  is  the  status of the claim that
                young children think differently from  older
                children  and  adults.   This claim is often
                contrasted  with  the  claim  that  children
                differ  from  adults merely in knowing less.
                I review the kinds of phenomena that  parti-
                cipants  in  the  debate take as relevant to
                deciding the issue.  Finally, I argue that a
                third  position,  in which the phenomenon of
                conceptual change is taken seriously, avoids
                the pitfalls of the original Piagetian posi-
                tion while allowing for its successes.

                I exemplify the third position by  sketching
                a recently completed case study of the emer-
                gence of biology as an independent domain of
                intuitive  theorizing in the first decade of
                life.  I will conclude by raising the  ques-
                tion  of  the  relation  between  conceptual
                change in childhood and conceptual change in
                the history of science.

------------------------------

Date: Thu, 15 Nov 84 18:41:04 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Relational Interface, Process Representation

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

       ROSI: A UNIX Interface for the Discriminating User
                               by
                            Mark Roth
                     Srinivasan Sundararajan


                      noon  Friday Nov. 16
                            PAI 3.38

ROSI (Relational Operating System Interface) strives to provide
the UNIX user with an environment based on the relational data model.
Usually,  relational database theory deals only with relations in
1NF.   In  this  talk,  this  assumption is  relaxed  by allowing
sets-of-values to exist anywhere an atomic  value  could  before.
These  relations will be unnormalized or in non-first-normal-form
(non-1NF).  The need for non-1NF relations, a relational calculus
and  algebra  dealing  with  non-1NF relations, and some extended
algebra operators will be discussed.
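A toy sketch of the non-1NF idea follows; the relation and the `unnest` operator are illustrative only, not ROSI's actual language:

```python
# A non-1NF relation lets an attribute hold a set of values.  The
# "unnest" operator below flattens such a relation back into first
# normal form (one atomic value per field).
non_1nf = [
    {"dir": "/src", "files": {"main.c", "util.c"}},
    {"dir": "/doc", "files": {"readme"}},
]

def unnest(relation, attr):
    """Expand a set-valued attribute into one tuple per member."""
    out = []
    for row in relation:
        for v in sorted(row[attr]):
            flat = dict(row)
            flat[attr] = v
            out.append(flat)
    return out
```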

The approach used in the design of ROSI was to model elements  of
the operating system environment as relations and to model system
commands as statements in a relational language. In adapting  the
relational data model to an operating system environment, we have
extended the model  and  tried  to  improve  existing  relational
languages.  The  extensions to the relational model  are designed
to allow a more natural representation of  elements  of  the  en-
vironment. The language extensions exploit the universal relation
model and utilize the graphical capabilities of  modern  worksta-
tions.

The goal of the project is to produce a user and  programmer  in-
terface to the operating system that:

        * is easier to use
        * is easier to learn
        * allows greater portability

as compared with existing operating system interfaces.

------------------------------

Date: 13 Nov 84 08:37:21 EST
From: Dennis Perzanowski <dennisp@NRL-AIC.ARPA>
Subject: Seminars - Partial Winter Schedule at NCARAI


           U.S. Navy Center for Applied Research
                 in Artificial Intelligence
           Naval Research Laboratory - Code 7510
                 Washington, DC  20375-5000

                   WINTER SEMINAR SERIES


Monday, 10:00 a.m.
3 December 1984
                Dr. Poohsan Tamura
                Westinghouse Research & Development Center
                Pittsburgh, PA
                 "Optical High Speed 3-D Digital Data Acquisition"

Monday, 10:00 a.m.
17 December 1984
                Dr. Terrence Sejnowski
                Department of Biophysics
                Johns Hopkins University
                Baltimore, MD
                 "The BOLTZMANN Multiprocessor"

Monday, 10:00 a.m.
14 January 1985
                Dr. Lance Miller
                IBM Thomas J. Watson Research Center
                Yorktown Heights, NY
                 "Bringing Intelligence into Word Processing:
                  The IBM EPISTLE System"

Monday, 10:00 a.m.
28 January 1985
                Dr. Larry Reeker
                Visiting Scientist at NCARAI
                from Tulane University, New Orleans, LA
                 "Programming for Artificial Intelligence:
                  LISP, Ada, PROLOG,   ... or Something Else?"


Meetings are held at 10:00 a.m. in the  Conference  Room  of
the   Navy   Center   for  Applied  Research  in  Artificial
Intelligence (Bldg. 256) located on Bolling Air Force  Base,
off  I-295, in the South East quadrant of Washington, DC.  A
map can be mailed for your convenience.
Coffee will be available starting at 9:45 a.m. for a nominal
fee.  Please do not arrive before this time.

IF YOU ARE INTERESTED IN ATTENDING A SEMINAR, PLEASE CONTACT
US  BEFORE NOON ON THE FRIDAY PRIOR TO THE SEMINAR SO THAT A
VISITOR'S PASS WILL BE AVAILABLE FOR YOU ON THE DAY  OF  THE
SEMINAR.   NON-U.S.  CITIZENS  MUST  CONTACT US AT LEAST TWO
WEEKS PRIOR TO A SCHEDULED SEMINAR.  If you  would  like  to
speak,  be  added  to  our  mailing list, or would like more
information,   contact   Dennis    Perzanowski.     ARPANET:
DENNISP@NRL-AIC or (202) 767-2686.

------------------------------

Date: Fri 9 Nov 84 17:21:21-PST
From: Jon Barwise <BARWISE@SU-CSLI.ARPA>
Subject: Course & Conference - Logic, language and computation meeting

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


           LOGIC, LANGUAGE AND COMPUTATION MEETINGS

The Association for Symbolic Logic (ASL) and the Center for the Study
of Language and Information (CSLI) are planning a summer school and a
meeting from July 8-20, 1985, at Stanford University.  The first week
(July 8-13) will consist of the CSLI Summer School, during which
courses on the following topics will be offered:

        Situation Semantics               Prof. Jon Barwise
        PROLOG                            Prof. Maarten van Emden
        Denotational Semantics            Prof. Gordon Plotkin
        Types and ML                      Dr. David MacQueen
        Complexity Theory                 Prof. Wolfgang Maass
        Abstract Data Types               Dr. Jose Meseguer
        The Theory of Algorithms          Prof. Yiannis Moschovakis
        Generalized Quantifiers           Dr. Lawrence Moss
        LISP                              Dr. Brian Smith
        Foundations of Intensional Logic  Prof. Richmond Thomason

(Enrollment in some courses using computers is limited.)

The second week (July 15-20) will consist of an ASL Meeting with
invited addresses, symposia, and sessions for contributed papers.  Of
the invited speakers, the following have already accepted:

        Prof. Peter Aczel                 Prof. David Kaplan
        Prof. Robert Constable            Prof. Kenneth Kunen
        Prof. Maarten van Emden           Prof. Per Martin-Lof
        Prof. Yuri Gurevich               Prof. John Reynolds (tentative)
        Prof. Anil Gupta (tentative)      Dr. Larry Wos
        Prof. Hans Kamp

Symposia:

Types in the Study of Computer and Natural Languages:

        Prof. R. Chierchia                Dr. David MacQueen
        Prof. Solomon Feferman            Prof. Barbara Partee

The Role of Logic in AI:

        Dr. David Israel                  Dr. Stanley Rosenschein
        Prof. John McCarthy


Possible Worlds:

        Prof. John Perry                  Prof. Robert Stalnaker


For further information or registration forms, write to Ingrid
Deiwiks, CSLI, Ventura Hall, Stanford, CA 94305, or call (415)
497-3084.  Room and board in a residence hall on campus are available,
and those interested should indicate their preference for single or
shared room, as well as the dates of their stay.  Since space is
limited, arrangements should be made early.  Some Graduate Student
Fellowships to cover the cost of accommodation in the residence hall are
available.  Abstracts of contributed papers should be no longer than
300 words and submitted no later than April 1, 1985.  The program
committee consists of Jon Barwise, Solomon Feferman, David Israel and
William Marsh.

------------------------------

End of AIList Digest
********************

∂21-Nov-84  1306	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #158    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Nov 84  13:04:07 PST
Date: Wed 21 Nov 1984 11:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #158
To: AIList@SRI-AI


AIList Digest           Wednesday, 21 Nov 1984    Volume 2 : Issue 158

Today's Topics:
  LISP - Public-Domain LISP & Lisp Performance Tools,
  Expert Systems - Paradocs,
  Algorithms & Theorem Proving - Karmarkar's Linear Programming Algorithm,
  Seminars - Solving Problems in Equational Theories &
    The Grand Tour (in Pattern Recognition)
----------------------------------------------------------------------

Date: 19-Nov-84 10:11:58-PST
From: mkm@FORD-WDL1.ARPA
Subject: Public-Domain LISP?

Is there a public domain copy of a LISP interpreter running around out there?
If so, I would like to know where, how to get it, etc.

Thanks,

Mike McNair
Ford Aerospace

------------------------------

Date: Monday, 19 Nov 1984 09:39:20-PST
From: cashman%what.DEC@decwrl.ARPA
Subject: Lisp performance tools

        I am interested in pointers to any tools which have been developed for
measuring the performance of Lisp application programs (not measuring the
performance of Lisp systems themselves).

Paul Cashman (Cashman%what.DEC@DECWRL)

------------------------------

Date: Mon 19 Nov 84 18:25:33-PST
From: Joe Karnicky <KARNICKY@SU-SCORE.ARPA>
Subject: Paradocs Expert System

    Has anyone out there heard of/seen/used a software system called Paradocs?
It is marketed by a company that has undergone several name changes and is
currently known as Cogensys (out of Farmington Hills, Mich.).  The system
is described as a "judgement processing system" and is represented as being
able to combine inputs from several domain experts into a judgement base
which is then able to diagnose problems in the domain.  The name of the fellow
who created the system is Buzz Berk (spelling uncertain).
     I'd very much appreciate ***any*** information and/or opinions about the
value or performance of this system.
                                                 Sincerely,
                                                 Joe Karnicky
                                                 <KARNICKY@SCORE>

------------------------------

Date: 19 Nov 1984 1421-EST
From: Venkat Venkatasubramanian <VENKAT@CMU-CS-C.ARPA>
Subject: Karmarkar

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

There is a front page article in today's NY Times on Karmarkar and his
linear programming algorithm.

------------------------------

Date: 19 Nov 84  1204 PST
From: Martin Frost <ME@SU-AI.ARPA>
Subject: linear programming "breakthrough"

         [Excerpted from the Stanford bboard by Laws@SRI-AI.]

18 Nov 84
By JAMES GLEICK
c.1984 N.Y. Times News Service
    NEW YORK - A 28-year-old mathematician at AT&T Bell Laboratories has
made a startling theoretical breakthrough in the solving of [linear
programming] systems of equations. [...]
    The Bell Labs mathematician, Dr. Narendra Karmarkar, has devised a
radically new procedure. [...]
    The new Karmarkar approach exists so far only in rougher computer
code. Its full value will be impossible to judge until it has been
tested experimentally on a wide range of problems. But those who have
tested the early versions at Bell Labs say that it already appears
many times faster than the simplex method, and the advantage grows
rapidly with more complicated problems. [...]
    Karmarkar, the son and nephew of mathematicians, was born in
Gwalior, India, and grew up in Poona, near Bombay. He joined Bell
Labs last year after attending the California Institute of Technology
at Pasadena and getting his doctorate from the University of
California at Berkeley.
    News of his discovery has been spreading through the computer
science community in preprinted copies of Karmarkar's paper and in
informal seminars. His paper is to be formally published in the
journal Combinatorica next month and will be a central topic at the
yearly meeting of the Operations Research Society of America this
week in Dallas. [...]
    Mathematicians visualize such problems as complex geometric solids
with millions or billions of facets. Each corner of each facet
represents a possible solution. The task of the algorithm is to find
the best solution, say the corner at the top, without having to
calculate the location of every one.
    The simplex method, devised by the mathematician George B. Dantzig
in 1947, in effect runs along the edges of the solid, checking one
corner after another but always heading in the direction of the best
solution. In practice it usually manages to get there efficiently
enough for most problems, as long as the number of variables is no
more than 15,000 or 20,000.
    The Karmarkar algorithm, by contrast, takes a giant short cut,
plunging through the middle of the solid. After selecting an
arbitrary interior point, the algorithm warps the entire structure -
in essence, reshaping the problem - in a way designed to bring the
chosen point exactly into the center. The next step is to find a new
point in the direction of the best solution and to warp the structure
again, bringing the new point into the center.
    ''Unless you do this warping,'' Karmarkar said, ''the direction that
appears to give the best improvement each time is an illusion.''
    The repeated transformations, based on a technique known as
projective geometry, lead rapidly to the best answer. Computer
scientists who have examined the method describe it as ingenious.
    ''It is very new and surprising - it has more than one theoretical
novelty,'' said Laszlo Babai, visiting professor of computer science
at the University of Chicago. ''The real surprise is that the two
things came together, the theoretical breakthrough and the practical
applicability.''
    Dantzig, now professor of operations research and computer science
at Stanford University, cautioned that it was too early to assess
fully the usefulness of the Karmarkar method. ''We have to separate
theory from practice,'' he said. ''It is a remarkable theoretical
result and it has a lot of promise in it, but the results are not all
in yet.''
    Many mathematicians interested in the theory of computer science
have long been dissatisfied with the simplex method, despite its
enormous practical success. This is because the program performs
poorly on problems designed specifically to test its weaknesses,
so-called worst possible case problems. [...]
    But fortunately for computer science, the worst-case problems almost
never arise in the real world. ''You had to work hard to produce
these examples,'' Graham said. And the simplex method performs far
better on average than its worst-case limit would suggest.
    Five years ago, a group of Soviet mathematicians devised a new
algorithm, the ellipsoid method, that handled those worst-case
problems far better than the simplex method. It was a theoretical
advance - but the ellipsoid had little practical significance because
its average performance was not much better than its worst-case
performance.
    The Soviet discovery, however, stimulated a burst of activity on the
problem and led to Karmarkar's breakthrough. The new algorithm does
far better in the worst case, and the improvement appears to apply as
well to the kinds of problems of most interest to industry.
    ''For a long time the mind-set that the simplex method was the way
to do things may have blocked other methods from being tested,'' said
Dr. Richard Karp, professor of computer science at the University of
California at Berkeley. ''It comes as a big surprise that what might
have been just a curiosity, like the ellipsoid, turns out to have
such practical importance.''

------------------------------

Date: Tue 20 Nov 84 14:15:01-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Re: new linear programming algorithm

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

     This may have implications for automatic theorem proving.  Since
Dick Karp is working on it, they will probably be explored.  We may get
some really high performance verification techniques someday.

------------------------------

Date: Tue, 20 Nov 84 16:17:41 pst
From: Vaughan Pratt <pratt@Navajo>
Subject: new linear programming algorithm and automatic theorem
         proving.

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

As applied to program verification, the typical linear programming problems
that arise are almost all of the form, find any integer solution to a set
of inequations of the form x+c<y where x and y are variables and c is an
integer constant.  (Different inequations are allowed different choices of
the two variables, i.e. there are more than two variables in the system as
a whole.)  There is a simple algorithm for solving these (in either integers
or reals) having worst case O(n**3) (essentially Floyd's algorithm for all
shortest paths).  The sets tend to be sparse, which can be taken advantage
of to get better than n**3 performance.  The implementation is simple and the
constant factor is small.
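That special case can be sketched as follows, using Bellman-Ford single-source shortest paths rather than the Floyd all-pairs algorithm mentioned above (over the integers, x + c < y is equivalent to x - y <= -(c+1)):

```python
def solve_difference_constraints(constraints):
    """Integer solution to constraints of the form x + c < y, given as
    (x, c, y) triples, or None if the system is infeasible.

    Each constraint becomes the edge inequality x - y <= -(c+1);
    Bellman-Ford from a virtual source assigns each variable its
    shortest-path distance, and a negative cycle means no solution.
    """
    names = {v for (x, c, y) in constraints for v in (x, y)}
    # edge (u, v, w) encodes  value[v] - value[u] <= w
    edges = [(y, x, -(c + 1)) for (x, c, y) in constraints]
    dist = {v: 0 for v in names}       # virtual source reaches all at 0
    for _ in range(len(names) + 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return dist
    return None                        # still relaxing: negative cycle
```

With sparse constraint sets this runs in roughly O(n*m) time and, as noted above, a careful implementation exploits the sparsity further.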

The reason this form crops up is that it is an alternative representation
for the inequational theory of successor and predecessor, which in turn
crops up since most arithmetic occurring in programs consists of subscript
incrementing and decrementing and checking against bounds.  Programs whose
arithmetic goes beyond this theory also tend to go beyond the theory of + and
- by having * and / as well, i.e. the fraction of programs covered by
linear programming but not by the above trivial fragment of it is not that
large.

-v

------------------------------

Date: Tue, 20 Nov 84 17:26:48 pst
From: Moshe Vardi <vardi@diablo>
Subject: New Algorithm for Linear Programming

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A preliminary report appeared in the proceedings of the last ACM Symp. on
Theory of Computing. It is a provably polynomial time algorithm, which unlike
Khachian's algorithm is a practical one. There are doubts among the experts
whether the algorithm is as revolutionary as the PR people say it is.

Moshe

------------------------------

Date: Sat 17 Nov 84 17:58:32-PST
From: Ole Lehrmann Madsen <MADSEN@SU-CSLI.ARPA>
Subject: Seminar - Solving Problems in Equational Theories

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


CENTER FOR THE STUDY OF LANGUAGE AND INFORMATION
                 AREA C MEETING

Topic:     REVE: a system for solving problems in equational theories,
                 based on term rewriting techniques.
Speaker:   Jean-Pierre Jouannaud, Professor at University of NANCY, FRANCE,
           on leave at SRI-International and CSLI.
Time:      1:30-3:30
Date:      Wednesday, Nov. 21
Place:     Ventura Seminar Room

Equational Logic has been used by mathematicians for a very long time and was
adopted by computer scientists only recently.  Specifications in OBJ2, an
"object-oriented" language designed and implemented at SRI-International, use
equations to
express relations between objects.  To express computations in this logic,
equations are used one way, e.g. as rules.  To make proofs with rules in this
logic requires the so-called "confluence" property, which expresses that the
result of a computation is unique, no matter in what order the rules are applied.
Proofs and computations are therefore integrated in a very simple framework.
When a set of rules does not have the confluence property, it is augmented by
new rules, using the so-called Knuth and Bendix completion algorithm, until the
property becomes satisfied.  This algorithm requires the set of rules to have
the termination property, i.e., an expression cannot be rewritten forever.  It
has been proved that this algorithm allows one to perform inductive proofs
without explicitly invoking an induction principle, and to solve equations
(unification) in the corresponding equational theory as well.
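A toy term-rewriting sketch follows (terms as nested Python tuples; the two rules compute addition on Peano numerals). Confluence is what guarantees the normal form is unique no matter which redex is reduced first:

```python
def rewrite(term):
    """Apply one rewrite step anywhere in the term, or return None."""
    if isinstance(term, tuple):
        if term[0] == "add" and term[1] == "0":
            return term[2]                 # add(0, y)    ->  y
        if (term[0] == "add" and isinstance(term[1], tuple)
                and term[1][0] == "s"):
            # add(s(x), y)  ->  s(add(x, y))
            return ("s", ("add", term[1][1], term[2]))
        for i, sub in enumerate(term):     # otherwise rewrite a subterm
            new = rewrite(sub)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normal_form(term):
    """Rewrite until no rule applies; termination makes this total."""
    while True:
        new = rewrite(term)
        if new is None:
            return term
        term = new

two = ("s", ("s", "0"))
```

The completion algorithm described above is what repairs a rule set when, unlike this one, it is not already confluent.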

REVE1, developed by Pierre Lescanne during a leave at MIT, implements all
these concepts, including automated proofs for termination.

REVE2, developed by Randy Forgaard at MIT, provided REVE with a very
sophisticated user interface, including an undo command.

REVE3, developed by Claude and Helene Kirchner in NANCY, includes new powerful
features, mainly mixed sets of rules and equations for handling theories
including permutative axioms.

All versions are developed in CLU and run on a VAX under Berkeley UNIX.

------------------------------

Date: Tue 20 Nov 84 22:59:35-PST
From: Art Owen <OWEN@SU-SCORE.ARPA>
Subject: Seminar - The Grand Tour

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Laboratory for Computational Statistics Seminar

Time:   3:15pm Wednesday November 21st
Place:  Sequoia Hall 114
Cookies:  at 3:00  in the Lounge

Title:          Hopes for the Grand Tour

by                      Daniel Asimov
                        Dept. of Computer Science
                        U.C. Berkeley

The grand tour is a technique for examining two-dimensional projections
of higher-dimensional objects.  The tour essentially picks a trajectory
through the space of possible projections, while a data analyst watches
the corresponding 'movie' on a graphics terminal.  The objective is to
pass, as quickly as possible, near most of the possible projections.  It
is a tool for finding projections that are informative.
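The core operation of the tour -- projecting the data onto a sequence of
2-planes -- can be sketched as follows.  A real grand tour interpolates
smoothly between planes; this toy version (mine, not Asimov's algorithm)
just draws random orthonormal 2-frames:

```python
import math, random

random.seed(0)

def random_frame(d):
    """A random orthonormal 2-frame in R^d: one 'still' of the tour.
    (An actual grand tour moves smoothly between such planes.)"""
    u = [random.gauss(0, 1) for _ in range(d)]
    v = [random.gauss(0, 1) for _ in range(d)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    dot = sum(a * b for a, b in zip(u, v))
    v = [b - dot * a for a, b in zip(u, v)]      # Gram-Schmidt step
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    return u, v

def project(point, frame):
    """Project a d-dimensional point onto the plane spanned by (u, v)."""
    u, v = frame
    return (sum(a * b for a, b in zip(u, point)),
            sum(a * b for a, b in zip(v, point)))

# One frame of the 'movie': project a 5-D point cloud to 2-D.
data = [[random.gauss(0, 1) for _ in range(5)] for _ in range(100)]
frame = random_frame(5)
xy = [project(p, frame) for p in data]
```

Each successive frame gives the analyst a new 2-D scatterplot of the same
high-dimensional data.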

The talk will discuss the current state of grand tour
research, identifying desirable properties
that a tour might have, indicating which such properties have been
achieved and directions for future research.

That's 3:00, 21 Nov 84, Sequoia Hall 114

------------------------------

End of AIList Digest
********************

∂21-Nov-84  2341	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #159    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Nov 84  23:41:25 PST
Date: Wed 21 Nov 1984 21:44-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #159
To: AIList@SRI-AI


AIList Digest           Thursday, 22 Nov 1984     Volume 2 : Issue 159

Today's Topics:
  Algorithms - Interlisp-D "malgorithm?",
  Programming Style - IBM Compiler Message,
  Machine Translation - Simplistic Beginnings,
  Books - Hackers: Heroes of the Computer Revolution,
  Research Styles - B.F. Skinner,
  Reasoning about Structure and Function - SIGART Special Issue,
  Conference - Hardware Description Languages
----------------------------------------------------------------------

Date: Sun 18 Nov 84 13:55:58-PST
From: Jay Ferguson <FERGUSON@SUMEX-AIM.ARPA>
Subject: Interlisp-D "malgorithm?"


Another point about this classic example of a true malgorithm is that
it reflects a lack of understanding of implementation detail.  The
CLISP feature of Interlisp translates a FOR statement into a MAP
function or a PROG, depending upon its structure.  Each time you invoke
a FOR statement interpretively, the translation occurs again.  When you
compile the FOR statement you will see large gains in efficiency.

I ran several tests of LDIFF, the initial FOR, and JonL's FOR, with
the following results:

                   interpreted          compiled

LDIFF              .00125 secs          .00125 secs

Todd - FOR         .02125 secs          .00444 secs

JonL - FOR         .02114 secs          .00115 secs


These were run under INTERLISP-10 on a DEC-2060 with a 26 element list,
taking the first 9 elements.  Each test was run 100 times.  LDIFF was
not actually compiled because it was a normal function call.
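The effect described above -- a built-in primitive beating an interpreted
looping construct -- is easy to reproduce in any language.  A hypothetical
Python analogue of the LDIFF-versus-FOR comparison (the functions here are
illustrative, not Interlisp's):

```python
import timeit

LST = list(range(26))

def take_slice(lst, n):
    """Take the first n elements via a primitive, analogous to LDIFF."""
    return lst[:n]

def take_loop(lst, n):
    """Take the first n elements with an explicit interpreted loop,
    analogous to the interpreted FOR."""
    out = []
    for i, x in enumerate(lst):
        if i >= n:
            break
        out.append(x)
    return out

# Same answer either way...
assert take_slice(LST, 9) == take_loop(LST, 9) == list(range(9))

# ...but the primitive is typically several times faster, mirroring
# the LDIFF-vs-interpreted-FOR gap in the timings above.
t_slice = timeit.timeit(lambda: take_slice(LST, 9), number=10000)
t_loop = timeit.timeit(lambda: take_loop(LST, 9), number=10000)
```

The exact ratio depends on the implementation, just as Jay's numbers
depend on INTERLISP-10 on a DEC-2060.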

jay

------------------------------

Date: Sun, 18 Nov 84 15:50:57 PST
From: Steve Crocker <crocker@AEROSPACE>
Subject: IBM compiler message rebuttal

At the risk of being misunderstood as an apologist for IBM's ultra prosaic
programming systems, I feel Earl Boebert's Nov 16 item on IBM's Fortran
compiler error message, viz. "COLUMN cc OF CARD nnnn CONTAINS A 12-4 PUNCH
MINUS SIGN INSTEAD OF AN 11 PUNCH MINUS SIGN.  CORRECT AND RESUBMIT.", is
taken out of context and misrepresents the situation.

First, a slight diversion.  I believe a 12-4 code is a D, and Earl probably
meant the 11-8-4 code, although my memory is a bit rusty and I surely have
not saved my old IBM BCD crib sheets.

The real issue is there had been two legal codes for minus, 11-8-4 and 11.
A decision had been made to phase out the 11-8-4 so it could be reassigned
to another symbol, and it eventually became the apostrophe, I believe.

Conversion proceeded in phases.  At the end of the conversion, the 11-8-4
code would always be treated as an apostrophe and receive no more special
attention if it were detected in an inappropriate position than any other
character would.  For example, "A = B'C" and "A = B$C" would get the same
treatment, and inhibit completion of the compilation.  (Admittedly, other
strategies for dealing with errors are possible, e.g. the DWIM system in
Interlisp, but that would mean a COMPLETE overhaul of the Fortran compiler,
and Fortran wasn't designed for either heuristic error correction or
interactive repair.)

To get to the point where 11-8-4 was freed up from its interpretation as a
minus sign, users were informed of the change and "encouraged" to amend
their programs.  The messages during the initial period were just warning
messages.  Later they were hard errors, as Earl related.  One might object
to this, but it's not simple to see what else to do.  If the 11-8-4 were to
take on a new meaning and still be accepted as a minus sign in all contexts
that minus signs are legal, both ambiguity and outright misunderstandings
would be propagated.  Despite the apparent inflexibility of the compiler, I
doubt this kind of error message caused any large disruption in programmer
productivity.

The problem was not unique to IBM's character set, of course.  The meaning
of the ASCII code for "←" was changed a few years ago.  It used to mean a
left arrow and some languages used it for assignment; now it means an
underscore and is used within identifiers.  This conversion was not without
some pain...

More seriously, the problem of catching all the dependencies of a change
to an established interface remains a challenge.  This may be a more fruitful
topic for discussion than malgorithms.

------------------------------

Date: Mon, 19 Nov 84 13:54 CST
From: Boebert@HI-MULTICS.ARPA
Subject: re: IBM compiler message rebuttal

Just when you think of a good cheap shot, somebody goes and makes it
sound like it was a reasonable thing to do...in any event, we were
undergrads and very much on the Algol side of the Algol/FORTRAN dispute,
and we thought the message a wonderful example of IBM mindlessness.
Maybe they should have appended THIS FATAL ERROR BROUGHT TO YOU IN THE
INTERESTS OF THE GREATER GOOD.

------------------------------

Date: Tue 20 Nov 84 22:47:28-EST
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Re: computers speaking German

I've seen a paper from the very early sixties that described a French
preprocessor for FORTRAN -- it converted ALLER to GOTO, FAIRE to DO,
etcetera....  The paper claimed this was a first step toward machine
translation (of natural language).
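Such a preprocessor is purely lexical substitution, which is exactly why it
was no real step toward machine translation.  A toy reconstruction of the
idea (the keyword table is illustrative; I have not seen the original
paper's mapping):

```python
# A toy French-to-FORTRAN keyword preprocessor in the spirit of the one
# described above: token-by-token substitution, nothing more.  The
# keyword table is illustrative, not taken from the original paper.
KEYWORDS = {"ALLER": "GOTO", "FAIRE": "DO", "SI": "IF",
            "CONTINUER": "CONTINUE"}

def preprocess(line):
    """Replace any token found in the keyword table; pass the rest through."""
    return " ".join(KEYWORDS.get(tok, tok) for tok in line.split())

assert preprocess("ALLER 100") == "GOTO 100"
assert preprocess("FAIRE 10 I = 1, 26") == "DO 10 I = 1, 26"
```

Nothing here touches meaning, word order, or ambiguity -- the hard parts of
translating natural language.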

------------------------------

Date: Mon 19 Nov 84 05:55:27-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Re: book"Hackers: heroes of the computer revolution"

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

   I picked up Steven Levy's "Hackers" today and have gotten through Part 1:
"True Hackers--Cambridge: the Fifties and Sixties".  All in all quite
enjoyable and well worth the money, though I did have to grit my teeth when
reading about the "TICO" text editor and the "MULTIX" and "TENNIX"
operating systems.  Sigh.  The book comes mostly from over a hundred
personal interviews conducted in 1982-83.  Levy seems to have done a
careful job of documenting the written sources and of compiling an index.
Those who were interviewed will have to be the ones to say how faithfully
their perspective was communicated.  Most of the stories in Part 1 have
become part of standard "hacker folklore" which has been passed from mouth
to mouth and keyboard to keyboard over the last 25 years.  It's nice to
have them all collected in one place now.

   I certainly wouldn't rate Steven Levy's writing in the same class as Tom
Wolfe's, but I must admit that the way the image of Cambridge is
painted as the birthplace of hacking was strikingly reminiscent of how
Wolfe built the image of the high desert in California as the birthplace of
the Right Stuff.  Levy even sprinkles references to "The Right Thing"
throughout the text.  (As we all know, Wolfe came up with his title after
seeing this term in the Jargon file. :-))

   I cheated and temporarily skipped over Part 2 ("Hardware Hackers") and
Part 3 ("Game Hackers") to the Epilogue--"The Last of the True Hackers".
The material covered here (e.g. the birth of Symbolics & LMI) is more
contemporary and thus familiar to many of us.  It is, sadly, pretty much
on the mark.

I too would be interested in hearing other opinions of this book
(especially from any of those interviewed.)

Clive

------------------------------

Date: 19 Nov 84 09:48 PST
From: JonL.pa@XEROX.ARPA
Subject: B.F. Skinner: A Man for All Reasonings

Shrager's conveyance of the quote about Skinner being an "expert systems
researcher" highlights a fundamental split in the AI community.

First, let me say I don't regard expert systems as a panacea -- at worst
they could be viewed as a technological spin-off of 20 years of AI
research.  Contrast this with the view taken by Skinner and his
disciples about SR being a fully adequate model of psychology; the
appearance of his book Verbal Behaviour is a desperate attempt to shore
up this claim.

On the other hand, a certain faction of AI is also trying to find a
fully adequate model of human cognitive capabilities (I would place
Minsky as the arch-defender of this "faith" -- the mind-as-meat-computer
camp); possibly *some* AI people would think that a brute force approach
along the lines of expert systems would be an interesting model, but I
don't personally know any such.  Another faction is less concerned with
mimicking the human structures and more concerned with the "artificial"
aspects of intelligence; I tend, now, to think of John McCarthy as the
prototype of this camp (see his article in Psychology Today earlier this
year -- perhaps April? -- and don't be put off by the fact that it
appears in, glaaag, Psychology Today).

The second approach is *not* to be confused with expert systems,
although one could imagine why "expert systems" would receive a more
favorable review from the latter camp than from the former.

I was present at MIT in late 1971 when the "MathLab" group was "read
out" of the AI community (the "MathLab" group at MIT quickly then became
MACSYMA).  Although MacSyma was certainly among the first of Expert
Systems with a major impact, it wasn't "AI" by the prevailing standards;
perhaps more like engineering, but not "AI".  What must be emphasized,
however, is that no one, at any time, thought of MacSyma as even a
partial model of human cognition.


If Skinner were coming of age now, with the same mind set, and were
indeed an expert systems researcher -- don't you think he'd have a more
"ambitious" goal?

-- JonL White --

------------------------------

Date: 18 Nov 1984 17:32-EST
From: milne <milne@wpafb-afita>
Subject: SIGART on Reasoning about Structure and Function


Special Issue of SIGART News on Reasoning about Structure and Function.

  We plan to edit a special issue of SIGART News devoted to representing,
and reasoning about, structure, behavior and function of devices and
systems.   This has recently become a topic of increasing importance
in giving expert systems capabilities for causal reasoning to support
diagnostic and other tasks.  Work in this area has been in the domains
of simple machines, electronic circuits, mechanical systems and medicine.

  Our aim is to cover the spectrum of work going on in the U.S.
and other countries in this general area.  We expect that the SIGART News
special issue will be followed by a special issue of some appropriate
journal containing fuller versions of selected papers from the former.

  Submissions are invited from researchers summarizing their approach,
results, problems and plans.   The submissions should be under
5 type-written pages, and should be sent to Prof. Rob Milne at the address
below.  The deadline for submissions is 15 January 1985.

Rob Milne                               B. Chandrasekaran
Department of Electrical Engineering    Department of Computer & Information
AFIT/ENG                                                Science
A.F. Institute of Technology            The Ohio State University
WPAFB, OH 45433                         Columbus, OH 43210
513-255-3576                                    614-422-0923
milne@wpafb-afita

------------------------------

Date: Saturday, 17 November 1984 20:35:17 EST
From: Mario.Barbacci@cmu-cs-spice.arpa
Subject: Conference - Hardware Description Languages


                                CALL FOR PAPERS

                        7TH INTERNATIONAL SYMPOSIUM ON
                    COMPUTER HARDWARE DESCRIPTION LANGUAGES
                            AND THEIR APPLICATIONS
                                    CHDL-85

                              AUGUST 29-31, 1985
                              KEIDANREN BUILDING
                                 TOKYO, JAPAN

Sponsored by the International Federation for Information Processing (IFIP) and
the Information Processing Society of Japan (IPSJ), organized by IFIP TC-10 and
IFIP WG 10.2, in cooperation with IEEE-CS, ACM, GI, and NTG.

The theme of the symposium is:

                    TOOL, METHOD, AND LANGUAGE INTEGRATION

The  Symposium  focuses  on  the design process as a whole. The objective is to
cover the various aspects of (computer-supported) specification,  verification,
modelling,  evaluation, and design of computer systems based on suitable design
languages. Integration can be considered from specification  to  implementation
as well as in terms of language and tool integration at a given level.

Topic areas are:

From Specification to Implementation of Digital Systems:
    methodological aspects            integrating levels of description
    formal verification and           performance and reliability
            correctness                       evaluation
    test generation from CHDL         synthesis
            descriptions

Computer System/Hardware Description Languages:
    formal specification languages    languages and technology
    multiple representation of        language support for verification,
            design objects                    performance, and reliability

Tool Integration:
    design environments               expert systems for system design
    data structures for integration   integration of tools for testing,
           between levels and tools            verification, and simulation

Acceptance and Experience:
    reality in industry               acceptance problems of new methods,
    integration with CAD/CAM                  languages and tools

Five  (5)  copies  of  the  full length manuscript in English, not exceeding 20
double-spaced typewritten pages, should be sent  to  the  Program  Chairman  to
arrive no later than December 15, 1984.

Notification   of   acceptance  is  planned  for  March  15,  1985.  The  final
camera-ready version of accepted papers is due on May 15, 1985.

Because the symposium is held immediately after the VLSI 85 conference at the
same location, the Program Committees of both conferences may transfer papers
that better fit the topics of the other conference.

General Chairman:                     Program Chairman:

Professor Tohru Moto-oka              Dr. Cees Jan Koomen
Department of Electrical Engineering  Philips International
University of Tokyo                   Product Development Coordination
Hongo, 7 chome                        VO-1, P.O. Box 218
Bunkyo-ku                             5600 MD Eindhoven,
Tokyo, Japan                          The Netherlands
telephone (212) 2111 ext. 6652        telephone (31) (40) 884962
                                      ArpaNet: Philips@sri-csl

Local Committee Chairman:             IFIP WG 10.2 Chairman:

Dr.Takao Uehara                       Dr. Mario R. Barbacci
Tools and Methodology Section         Department of Computer Science
Software Laboratory                   Carnegie-Mellon University
Fujitsu Laboratories Ltd.             Pittsburgh
1015 Kamikodanaka Nakahara-ku         Pennsylvania 15213
Kawasaki 211, Japan                   USA
telephone (81) (44) 777 1111 X6155    telephone (412) 578-2578
telex 3842 122

Local Committee:

H. Ando  (publicity),  Y. Ikemoto  (local  arrangements), O. Karatsu (finance),
T. Uehara (Chairman)

Program Committee:

M. Barbacci (USA), D. Borrione (France), S. Crocker (USA), J. Darringer  (USA),
S. Dasgupta    (USA),   R. Hartenstein   (FRG),   E. Hoerbst   (FRG),   J. Jess
(Netherlands),  C.J.   Koomen   (Netherlands,   Chairman),   F. Rammig   (FRG),
W. Sherwood   (USA),  T. Sudo  (Japan),  T. Uehara  (Japan),  M. Vernon  (USA),
A. Yamada (Japan)

------------------------------

End of AIList Digest
********************

∂24-Nov-84  1543	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #160    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Nov 84  15:41:51 PST
Date: Sat 24 Nov 1984 13:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #160
To: AIList@SRI-AI


AIList Digest           Saturday, 24 Nov 1984     Volume 2 : Issue 160

Today's Topics:
  Plan Recognition
  Hardware - Uses of Optical Disks,
  Linguistics - Language Simplification & Natural Languages,
  Seminars - Intention in Text Interpretation (Berkeley) &
    Cooperative Distributed Problem Solving (CMU) &
    A Shape Recognition Illusion (CMU)
----------------------------------------------------------------------

Date: 21 Nov 1984 15:55:02-EST
From: kushnier@NADC
Subject: Plan Recognition


                               WANTED

We are interested in any information, papers, reports, or titles of same
dealing with AI PLAN RECOGNITION that can be supplied to the government
at no cost (they made me say that!). We are presently involved in an
R&D effort requiring such information.

                                          Thanks in advance,

                                          Ron Kushnier
                                          Code 5023
                                          NAVAIRDEVCEN
                                          Warminster Pa. 18974

kushnier@nadc.arpa

------------------------------

Date: 19 Nov 84 16:55:09 EST
From: DIETZ@RUTGERS.ARPA
Subject: Are books obsolete?

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

Sony has recently introduced a portable compact optical disk player.
I hear they intend to market it as a microcomputer peripheral for
$300.  I'm not sure what its capacity will be, so I'll estimate it at
50 megabytes per side.  That's 25,000 ASCII-coded 8 1/2 x 11 pages, or
1000 compressed page images, per side.  Disks cost about $10, for a
cost per word orders of magnitude less than books.
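The arithmetic behind these estimates can be checked directly; the figures
below assume roughly 2,000 ASCII characters per typewritten page, which is
my reading of the estimate above:

```python
# Back-of-the-envelope check of the capacity figures quoted above.
side_bytes = 50_000_000           # assumed 50 megabytes per side
chars_per_page = 2_000            # ~one typewritten 8 1/2 x 11 page in ASCII

ascii_pages = side_bytes // chars_per_page
assert ascii_pages == 25_000      # matches the 25,000-page figure

image_pages = 1_000               # quoted number of compressed page images
bytes_per_image = side_bytes // image_pages
assert bytes_per_image == 50_000  # i.e., ~50 KB per compressed page image
```

At $10 per disk, that is $10 for 25,000 pages -- the "orders of magnitude"
advantage over books claimed above.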

Here's an excellent opportunity for those concerned with the social
impact of computer technology to demonstrate their wisdom.  What will
the effect be of such inexpensive read-only storage media?  How will
this technology affect the popularity of home computers?  What
features should a home computer have to fully exploit this technology?
How should text be stored on the disks?  What difference would
magneto-optical writable/erasable disks make?  How will this
technology affect

------------------------------
Date: Tue, 20 Nov 84 22:26:16 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Re: Language Simplification,  V2 #157

On Gillam's comments on simplification:

1. In the Southern U.S., there is a raising of the vowels: "pen" becomes
"pin."  This results in homophony between the words "pen" and "pin."
Thus, in these dialects, the word "pin" becomes something like "peeun,"
with the vowel raised even more.  The lesson is that an ostensible
simplification complicates the system further by requiring a
differentiation between certain phonological forms.  This is an instance
of supposed regularity causing complication.

------------------------------

Date: Sun, 18 Nov 84 17:45:34 PST
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: what language 'is' (?)


re:  what natural language 'is'

While it's fun to make up criteria and then use those criteria to judge
one natural language as 'superior' to another, or decide that a given NL
has 'degenerated' etc, I don't really see this approach as leading
anywhere (except, perhaps, for 'phylogenetic' studies of language
'speciation', just as pot shards are examined in archeology for cultural
contacts...  We could also spend our time deciding which culture is
'better' by various criteria, e.g. more weapons, less TV, etc).

It's also convenient to talk about natural language as if it's something
"on its own".  However, I view this attitude as scientifically
unhealthy, since it leads to an overemphasis on linguistic structure.
Surely the interesting questions about NL concern those cognitive
processes involved in getting from NL to thoughts in memory and back out
again to language.  These processes involve forming models of what the
speaker/listener knows, and applying world knowledge and context.  NL
structure plays only a small part in these overall processes, since the
main ones involve knowledge application, memory interactions, memory
search, inference, etc.

e.g. consider the following story:

     "John wanted to see a movie.  He hopped on his bike
     and went to the drugstore and bought a paper.
     Then he went home and called the theater to get the
     exact time."

Now we could have said this any number of ways, e.g.:

     "John paid for a paper at the drugstore.  He'd gotten
     there on his bike.  Later, at home,  he used the number
     in the paper to call the theater,  since he wanted
     to see a movie and needed to know the exact time."

The reason we can handle such diverse versions -- in which the goals and
actions appear in different order -- is that we can RECONSTRUCT John's
complete plan for enjoying a movie from our general knowledge of what's
involved in selecting and getting to a movie.  It looks something like
this:

     enjoy movie
          need to know what's playing
           --> read newspaper (ie one way to find out)
                  need newspaper
                  --> get newspaper
                        possess newspaper
                           need $  to buy it (ie one way to get it)
                        need to be where it's sold
                           need way to get there
                             --> use bike (ie one way to travel)
          need to know time
            --> call theater (ie one way to find out)
                   need to know phone number
                     --> get # out of newspaper

          need to physically watch it
            need to be there
              --> drive there (ie one way to get there)
            need to know how to get there
               etc

We use our pre-existing knowledge (e.g.  of how people get to a movie of
their choice) to help us understand text about such things.  Once we've
formed a conceptual model of the planning involved (from our knowledge
of constraints and enablement on plans and goals), then we can put the
story 'in the right order' in our minds.
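One way to make the reconstruction idea concrete is to store the plan as a
tree of goals with their enabling subgoals, and read the canonical ordering
off a post-order walk.  A minimal sketch (the representation is mine, not a
claim about any particular parser):

```python
# A minimal sketch of the goal/plan tree above, with enablements as
# children.  "Understanding" the story then amounts to matching mentioned
# actions against tree nodes and reading the canonical order off the tree.
PLAN = ("enjoy movie",
        [("know what's playing",
          [("buy paper", [("go to drugstore", [("ride bike", [])])])]),
         ("know show time",
          [("call theater", [("get number from paper", [])])])])

def canonical_order(node):
    """Post-order walk: enabling actions come before the goals they serve."""
    name, children = node
    steps = []
    for c in children:
        steps.extend(canonical_order(c))
    steps.append(name)
    return steps

# The story mentions events in narrated order; the plan tree lets us
# recover the order in which they must actually have happened.
story = ["buy paper", "ride bike", "call theater"]
order = canonical_order(PLAN)
reconstructed = sorted(story, key=order.index)
assert reconstructed == ["ride bike", "buy paper", "call theater"]
```

Both tellings of the movie story map onto the same tree, which is why
readers can handle either order.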

In fact, the notion of goals, plans, and enablements should be universal
among all humans (the closest thing to a 'universal grammar', for people
who insist on talking about things in terms of 'grammars').  Given this
fact, EVERY natural language should allow sparse and somewhat
mixed-order renditions of plan-related stories.  Is this a feature,
then, of one or more NATURAL LANGUAGEs, or is it really a feature of
general INTELLIGENCE -- i.e., planning, inference, etc.?

Clearly the interesting problems here are:  how to represent goal/plan
knowledge, how this knowledge is referred to in a given language, and
how these knowledge sources interact to instantiate a representation of
what the reader knows after reading about John's movie trip.

(Of course, other types of text will involve other kinds of conceptual
constructs -- e.g. editorial text involves reasoning and beliefs).

Wittgenstein expressed the insight -- i.e. that natural languages are
fundamentally different from formal languages -- in terms of his notion
of "language games".  He argued that speakers are like the players of a
game, and to the extent that the players know the rules, they can do
all sorts of communication 'tricks' (since they know another player
can use HIS knowledge of the "game" to extract the most appropriate
meaning from an utterance, gesture, text...).  As a result, Wittgenstein
felt it was quite misguided to argue that formal languages are 'better'
because they're unambiguous.

Now this issue is reappearing in a slightly different guise as a number
of ancient natural(?) languages are offered as 'the answer' to our
representational problems, based on the claim that they are unambiguous.
Two favorites currently seem to be sastric Sanskrit and a Bolivian
language called "Aymara".

(Quote from news article in LA Times, Nov.  7, '84 p 12:  "...  wisemen
constructed the language [Aymara] from scratch, by logical, premeditated
design, as early as 4,000 years ago")

I suspect ancient and exotic languages are being chosen since fewer
people know enough about them to dispute any claims made.  Of course
this isn't done on purpose:  it's simply that the better known NLs that
get proposed are more quickly discarded since more people will know, or
can find, counter-examples for each claim.

By the way, the kinds of discussions we have here at UCLA on NL are very
different from those I see on AIList.  Instead of arguing about what
language 'is' (i.e. the definitional approach to science that Minsky and
others have criticized on earlier AILists), we try to represent ideas
(e.g.  "Religion is the opiate of the masses", "self-fulfilling
prophecy", "John congratulated Mary", etc) in terms of abstract
conceptual data structures, where the representation chosen is judged in
terms of its usefulness for inference, parsing, memory search, etc.
Discussions include how a conceptual parser would take such text and map
it into such constructs; how knowledge of these constructs and
inferential processes can aid in the parsing process; how the resulting
instantiated structures would be searched during:  Q/A, advice
giving, paraphrasing, summarization, translation, and so on.

It's fun to BS about NL, but I wouldn't want my students to think that
what appears on AIList (with a few exceptions) re: NL is the way NL
research should be conducted or specifies what the important research
issues in NL are.

I hope I haven't insulted anyone.  (If I have, then you know who you
are!)  I'm guessing that most readers out there actually agree with me.

------------------------------

Date: Wed, 21 Nov 84 14:02:39 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Intention in Text Interpretation (Berkeley)

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 27, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Walter Michaels and  Steven  Knapp,  English
                Department, UC Berkeley

TITLE:          ``Against Theory''

ABSTRACT:       A discussion of the role of intention in the
                interpretation   of  text.   We  argue  that
                linguistic meaning  is  always  intentional;
                that   linguistic   forms  have  no  meaning
                independent of  authorial  intention;   that
                interpretative disagreements are necessarily
                disagreements about what a particular author
                intended  to  say;  and that recognizing the
                inescapability of intention has fatal conse-
                quences  for  all  attempts  to  construct a
                theory of interpretation.

------------------------------

Date: 21 Nov 84 15:24:46 EST
From: Steven.Shafer@CMU-CS-IUS
Subject: Seminar - Cooperative Distributed Problem Solving (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Victor Lesser, from U. Mass., is coming to CMU on Tuesday to present
the AI Seminar.  He will be speaking about AI techniques for use on
distributed systems.  3:30 pm on Tuesday, November 27, in WeH 5409.


COOPERATIVE DISTRIBUTED PROBLEM SOLVING

   This research topic is part of a new research area that has
recently emerged in AI, called Distributed AI.  This new area
combines research issues in distributed processing and AI by
focusing on the development of distributed networks of
semi-autonomous nodes that cooperate interactively to solve a
single task.
   Our particular emphasis in this general research area has
been on how to design such problem-solving networks so that
they can function effectively even though processing nodes have
inconsistent and incomplete views of the data bases necessary for
their computations.  An example of the type of application that
this approach is suitable for is a distributed sensor network.
   This lecture will discuss our basic approach called Functionally-
Accurate Cooperative Problem-Solving, the need for sophisticated
network-wide control and its relationship to local node control, and
[end of message -- KIL]

------------------------------

Date: 21 November 1984 1639-EST
From: Cathy Hill@CMU-CS-A
Subject: Seminar - A Shape Recognition Illusion (CMU)

Speaker:  Geoff Hinton and Kevin Lang (CMU)
Title:    A Strange property of shape recognition networks.

Date:     November 27, 1984
Time:     12 noon - 1:30 p.m.
Place:    Adamson Wing in Baker Hall

Abstract: We shall describe a parallel network that is capable of
          recognizing simple shapes in any orientation or position
          and we will show that networks of this type are liable to
          make a strange kind of error when presented with several
          shapes that are followed by a backward mask.  The error
          involves perceiving one shape in the position of another.
          Anne Treisman has shown that people make errors of just
          this kind.

------------------------------

End of AIList Digest
********************

∂25-Nov-84  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #161    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Nov 84  17:34:47 PST
Date: Sun 25 Nov 1984 15:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #161
To: AIList@SRI-AI


AIList Digest            Sunday, 25 Nov 1984      Volume 2 : Issue 161

Today's Topics:
  Benchmarking Reading Comprehension
  Reviews - AI Abstracts & IEEE Computer & High Technology & Learnability,
  Humor - Brain Structure,
  Algorithms - Many-Body Problem & Macro Cacheing & Linear Programming,
  Seminar - Set Theoretic Problem Translation (CMU)
----------------------------------------------------------------------

Date: Sun 25 Nov 84 01:50:44-EST
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Benchmarking Reading Comprehension

     Does anyone know if any objective standards or tests have been
devised for rating or benchmarking the power of natural language
understanding systems?

     In the world of chess there exists a standard international system
for rating players, one that can be applied to chess-playing programs.
I think it would be useful to devise a similar system for natural language
understanding software. Such a benchmarking scheme would make it
possible to track the rate of progress in the most fundamental branch
of computational linguistics, and to compare the performance of
competing systems. The National Bureau of Standards might be an
appropriate organization to oversee a project of this kind.

     Perhaps such a benchmarking system could be based on the reading
comprehension sections of the SAT or GRE exams. A GRE-style multiple
choice test for natural language understanding would avert the problem
of wrongly jumbling the capacity to understand--to recognize
propositions, reason, and draw inferences--with the ability of a
program to answer questions with well-formed discourse, a domain of
skill which is really quite separate from pure comprehension. It would
be desirable to establish standard tests for every major language in
the world.

     Is there an existing natural language understanding system in the
world that can read even at the level of a third grader? Probably not.

     Perhaps a major cash prize could be offered to the first
researcher or research team that designs (no doubt decades from now)
a program which consistently scores at least 700 on the reading
comprehension sections of standardized tests like the SAT or GRE.

------------------------------

Date: Sat 24 Nov 84 15:27:29-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI Abstracts

Two ads for AI abstracts and indices have recently crossed my desk:

Scientific Datalink is offering a four-volume index to the AI
research reports that they offer in microfiche (one volume of
authors and titles, one of subjects, and two of abstracts).
The price is $375 now, $495 after publication.  Individual
volumes are not offered separately in this ad.  For more
information, write to Ms. Chia Reinhard, Scientific Datalink,
850 Third Avenue, New York, NY 10022.  (212) 838-7200.

ECI/Intelligence is offering a new journal, Artificial Intelligence
Abstracts, at $295 for 1985.  (The mockup of the first issue is dated
October 1984, but the text consists of such gems as "Ut einim ad
minim veniam, quis nostrud exercitation laboris nisi ut aliquip ex
ea commodo consequet.")  The journal offers to keep you up to date on
market research, hardware and software developments, expert systems,
financial planning, and legislative activities.  There is a similar
journal for CAD/CAM.  The AI advisory board includes Harry Barrow,
Michael Brady, Pamela McCorduck, and David Shaw.

ECI/Intelligence also offers a full-text document order service
from their microfiche collection.  For more info, write to them
at 48 West 38 Street, New York, NY 10018.

                                        -- Ken Laws

------------------------------

Date: Sat 24 Nov 84 14:51:17-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Computer Articles

AI is mentioned a few times in the November issue of IEEE Computer.

On p. 114, there are excerpts from the keynote Compcon speech by
Robert C. Miller, a senior vice president of Data General.  He is
touting expert systems, and estimates that overall sales of AI-related
products will increase from $100-150 million this year to $2.5
billion by the end of the decade.

P. 117 has a very short mention of NCAI and the coming IJCAI.

P. 133 has L. Elliot's review of Learning and Teaching with Computers,
by Tim O'Shea and John Self.  The book is evidently about half AI
(Logo, MYCIN, knowledge engineering, and epistemology) and half
computer-assisted learning (teaching styles, learning styles,
tutoring strategies).

The rest of the magazine is mostly about Teradata's database machines,
the PICT graphical programming language, workstations in local area
networks, and some overviews of software engineering at NEC and GTE.

                                        -- Ken Laws

------------------------------

Date: Sat 24 Nov 84 15:02:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: High Technology Articles

The December issue of High Technology has some interesting articles
for computer folk.  On p. 9 there's a short item about Logicware's
(Hungarian-developed) MPROLOG, a "modular" PROLOG for IBM PCs and
mainframes, 68000-based micros, VAXen, and other hosts.  Other
articles review the best of current microcomputer equipment (chiefly
PC-AT and Macintosh), survey the field of visual flight and driving
simulators, and present an excellent introduction to database
structures and machines (esp. relational databases).

                                        -- Ken Laws

------------------------------

Date: Sun 25 Nov 84 15:25:50-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Article on Learning

The November issue of CACM includes "A Theory of the Learnable", by
L.G. Valiant of Harvard.  I am not competent to evaluate the article,
which is based on theorems in computational complexity, but I can
characterize it as follows:

The author is considering the class of concepts in propositional logic
that can be learned in a polynomial number of steps from a source of
positive examples (produced as required in accordance with a probability
distribution) and an oracle that can classify an arbitrary Boolean vector
as a positive or negative exemplar.  The classes that are found to be
learnable are  1) conjunctive normal form expressions with a bounded
number of literals in each clause (no oracle required),  2) monotone
disjunctive normal form expressions, and  3) arbitrary expressions
in which each variable occurs just once (no examples required, but
the oracle must be especially capable).  The method of learning used
is such that the learned concept may occasionally reject true exemplars
but will not accept false ones.
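The one-sided flavor of that learning method can be seen in its simplest case, a monotone conjunction learned from positive examples alone.  The sketch below is mine, not from the article, and the function names are invented: the learner starts with the conjunction of all variables and deletes any variable that some positive example sets to 0, so the hypothesis may reject true exemplars but never accepts a false one.

```python
# Illustrative sketch (mine, not from Valiant's article): elimination
# learning of a monotone conjunction from positive examples only.

def learn_conjunction(n_vars, positive_examples):
    """Each example is a 0/1 tuple of length n_vars drawn from the concept."""
    hypothesis = set(range(n_vars))            # start with every variable
    for example in positive_examples:
        # Any variable that is 0 in a positive example cannot be required.
        hypothesis -= {i for i in hypothesis if example[i] == 0}
    return hypothesis                          # indices that must be 1

def classify(hypothesis, x):
    return all(x[i] == 1 for i in hypothesis)

# Target concept: x0 AND x2.  With too few examples the hypothesis is
# overly strict (one-sided error); it converges as examples accumulate.
h = learn_conjunction(4, [(1, 0, 1, 1), (1, 1, 1, 0), (1, 0, 1, 0)])
# h == {0, 2}
```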

The closing remarks contain this interesting quote:

  An important aspect of our approach, if cast in its greatest
  generality, is that we require the recognition algorithm of the
  teacher and learner to agree on an overwhelming fraction of only
  the natural inputs.  Their behavior on unnatural inputs is
  irrelevant, and hence descriptions of all possible worlds are not
  necessary.  If followed to its conclusion, this idea has considerable
  philosophical implications:  A learnable concept is nothing more
  than a short program that distinguishes some natural inputs from
  some others.  If such a concept is passed on among a population
  in a distributed manner, substantial variations in meaning may arise.
  More importantly, what consensus there is will only be meaningful
  for natural inputs.  The behavior of an individual's program for
  unnatural inputs has no relevance.  Hence thought experiments and
  logical arguments involving unnatural hypothetical situations
  may be meaningless activities.

                                        -- Ken Laws

------------------------------

Date: Tue 20 Nov 84 17:36:43-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: quote  {:-)

Attributed to Marvin Minsky (someone tell me if it's wrong)

        "I'll bet the human brain is a kludge."

                                                        - Richard

------------------------------

Date: Sat 24 Nov 84 14:36:33-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Many-Body Problem

Need software to run 10,000-body simulations?  A VAX Pascal program
is discussed in the November CACM Programming Pearls column.
Optimization brought the run time down from one year to one day.

                                        -- Ken Laws

------------------------------

Date: 22 Nov 84 17:20 PST
From: JonL.pa@XEROX.ARPA
Subject: Macro cacheing: Interlisp-D Interpreter as "malgorithm"?

Jay Ferguson suggests, in his contribution of 18 Nov 84 13:55:58-PST,
that an explanation for the timing differences between using a FOR loop
and using LDIFF in the original "Interlisp-D malgorithm" is because
"Each time you call a FOR statement interpetively the translation
occurs."  This is not the case -- the Interlisp interpreter (in all
implementations of Interlisp, I believe) caches the results of any macro
or CLISP expansion into a hash array called CLISPARRAY; see secton 16.8
of the Interlisp Reference Manual (Oct 1983).  In fact, the figures
supplied by Jay show a speed difference of a factor of 17, which would
be consistent with the basic loop being compiled in LDIFF (a "system"
function) and being interpreted in the FOR.

The question of "cacheing" as noted above is a complicated one, and in
Jay's defense, I can say that it is not at all clearly outlined in the
IRM.   For example, it lays the burden on the "in-core" structure editor
of examining CLISPARRAY on any change, to de-cache the expansions for
any code that is modified; but random modifications (caused by, say,
redefinition of a function upon which the expansion depends), don't
cause de-cacheing, and this is the source of some very obscure bugs.
Furthermore, lots of cruft may stick around forever because the garbage
collector does not reclaim items placed in the cache; for this reason,
it is advisable to occasionally do (CLRHASH CLISPARRAY).

MacLisp provides three options for macro expansions which are
controllable on a macro-by-macro basis (CLISP is, unfortunately, a
kludge dating to pre-macro Interlisp days -- it could and should be
implemented entirely as a set of macros, so I will view it in that light
for the rest of this discussion): (1) do no cacheing, (2) "displace" the
original list cell containing the macro call with a form which contains
both the original and the expanded code [compiler and interpreter use
the expanded code, prettyprinter uses the original], and (3) cache the
expansion in a hash array called MACROMEMO.  While all these options can
be run in any Lisp that implements macros by giving the expansion
function a pointer to "the whole cell", Common Lisp provides the
*macroexpand-hook* facility so that the cacheing code which is common to
all macros can be put in one place, rather than distributed throughout
the many macro bodies.
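A rough Python illustration (mine; the names are invented) of cacheing styles (2) and (3) above, with dicts and mutable lists standing in for Lisp hash arrays and cons cells:

```python
# Style (3): cache expansions in a table keyed by the identity of the
# original form, analogous to Interlisp's CLISPARRAY or MacLisp's MACROMEMO.
expansion_cache = {}

def expand_form(form, expander):
    key = id(form)                  # identity of the "cell", not its value
    if key not in expansion_cache:
        expansion_cache[key] = expander(form)
    return expansion_cache[key]

# Style (2): "displace" the original cell in place so that it carries both
# the original and the expanded code; a mutable list plays the cons cell.
# A prettyprinter would show slot 1, compiler and interpreter use slot 2.
def displace(form, expander):
    if form[0] != 'DISPLACED':
        expanded = expander(form)   # copy the original before mutating it
        form[:] = ['DISPLACED', list(form), expanded]
    return form[2]

# The de-cacheing hazard noted above: if a macro is redefined, stale
# entries survive until the whole cache is flushed -- the analogue of
# doing (CLRHASH CLISPARRAY).
def clear_cache():
    expansion_cache.clear()
```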

-- JonL --

------------------------------

Date: 22 Nov 84 07:15:10 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Linear Programming Algorithms


The recent discussion of the Karmarkar Linear Programming algorithm on
this newsgroup has stirred me to comment.  I wouldn't think a cubic-time
linear programming algorithm any great deal; indeed, I present one
myself shortly.  An algorithm that solves the INTEGER linear programming
problem, however, is something else again.  My understanding is that the
Khachiyan algorithm solved the integer problem in polynomial time.  If
the Karmarkar algorithm does also, then it is truly worthy.  But it has
not been apparent in the popularized discussion I have been reading that
this is so.  Perhaps those on the net who are more in the know can tell
us whether it is or not.

Ever since I took Luenberger's course in Linear and Nonlinear Programming
(Stanford EES department, 1973) I have wondered why people didn't apply
the powerful nonlinear tools to the linear problem.  I played around with
the idea and came up with an algorithm I call "gradient step linear
programming".  I've never done anything with it because it's so simple
and obvious that it seemed someone had to have thought of it before.
Because the algorithm follows the gradient as best it can subject to
the constraints, from a geometric point of view it travels through the
"interior" of the polytope, much as has been described for the Karmarkar
algorithm.  Optimality is achieved in no more than N steps, each step
requiring O(N↑2) numerical operations, where N is the dimension of the
space.

Mathematical notation isn't easy on a terminal.  I adopt the convention
of representing vectors by preceding them with an underline, as "←x".
Subscripts I represent using a colon, ←c:j being the j-th vector ←c.
The inner product is represented by < * , * >.  I use a functional
form (i.e. f(  )) to represent things like sums. The rest should be
fairly obvious.

A statement of the linear programming problem, convenient for the description
of the algorithm,  follows.  This problem statement is readily converted into
other forms of the linear programming problem.  The problem is to maximize
with respect to the N-dimensional vector ←x the linear functional:
               <←c , ←x >
 subject to the constraints:
               <←a:j , ←x > >= b:j   for j = 1, 2, ..., M
The vector '←c' is often called the cost vector when a minimization problem is
being considered.  M >= N, as otherwise the solution is unbounded.

The procedure for finding an initial feasible vector ←x(0) is essentially
identical to the procedure for finding an optimum vector.  For now an initial
feasible vector ←x(0) in the interior of the polytope defined by the
constraints may simply be assumed.  The initial gradient vector,
maximizing the rate of change subject to the active constraints, is ←c(0) =
←c.  At each stage, the idea of the algorithm is to move along the current
gradient vector (subject to the active constraints) as far as possible, until
a previously inactive constraint is encountered.  The direction of change is
modified by the most recent active constraint, until no motion in the
direction of the gradient is possible.  This is both the formal and the
obvious stopping condition for the algorithm.  The details of the algorithm
follow:

Step 1:  Given a current solution point ←x(n), determine the step size s
(giving the next solution point ←x(n+1) = ←x(n) + s*←c(n)), identifying at
the same time the next active constraint.
          D:j = < ←x(n) , ←a:j > - b:j    ( >= 0 )
          s:j = D:j / ( - < ←c(n) , ←a:j > )   for each inactive j with
< ←c(n) , ←a:j > < 0, i.e., the constraints whose margin shrinks along ←c(n).
          s = min { s:j } , and the next active constraint has the index j(n)
providing the minimum; s is the maximum feasible step size.

Step 2:  Apply the Gram-Schmidt procedure (i.e., projection) to remove the
component of the most recent active constraint from the gradient direction, so
that subsequent steps will not result in violation of the active constraint.
It is necessary first to remove all components of previous active constraints
from the newly activated constraint to ensure that the adjusted gradient
direction will not violate any previous active constraint.
          ←a(n) = ←a:j(n) - sum(k = 0 to (n-1))[
                         ←a(k) * <←a:j(n),←a(k)> / <←a(k),←a(k)> ]

          ←c(n+1) = ←c(n) - ←a(n) * <←c(n),←a(n)> / <←a(n),←a(n)>

Steps 1 and 2 are repeated until ←c(n+1)=0, at which point ←x(n) is the
optimal solution to the linear programming problem.  Additional tests to
detect and recover from degeneracy are easily added.

A detailed proof of optimality is straightforward but somewhat tedious.
Intuitively, the algorithm is optimal because steps are always taken along the
direction maximizing the rate of change of the functional, subject to the
active constraints.  At the optimal point, there is no feasible motion
improving the functional.  Stated differently, the original cost vector lies
in the space spanned by the gradients of the constraints, and this is the
formal (Lagrange) optimization condition.  It is only necessary to add
constraints to the set of active constraints because the optimization space is
convex, and therefore changes in the functional improvement direction (and
reduction in the rate of improvement) result only from encountering new
constraints and having to turn to follow them.

Note that the number of iterations is simply the number of dimensions N of the
space, this being also the number of vectors required to span the space.  Each
iteration entails the removal of o(N) vector components from the new
constraint, and the removal of a vector component entails o(N) multiplications
and additions.  Similarly, determining the step size requires the computation
of o(N) inner products, each requiring o(N) multiplications and additions.
Finding the initial feasible vector requires about the same effort in
general.  Thus overall the algorithm presented for solving the linear
programming problem requires O(N**3) arithmetic operations.

An initial feasible point can be determined starting from an arbitrary point
(say the origin), identifying the unsatisfied constraints, and moving in
directions that satisfy them.  It may be more direct to simply start with a
"superoptimal" point, say K*←c for suitably large K, and iterate using
essentially the previously described algorithm along the negative constrained
gradient directions to feasibility.  By duality, the resulting feasible point
will also be optimal for the original problem.
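For concreteness, the following NumPy transcription of Steps 1 and 2 is mine (all names are invented, the degeneracy tests are omitted, and the step length is taken only over inactive constraints whose margin shrinks along the current gradient); it is a sketch of the posting's algorithm, not a production solver:

```python
import numpy as np

def gradient_step_lp(c, A, b, x0, tol=1e-9):
    """Maximize <c, x> subject to A @ x >= b, starting from a strictly
    feasible interior point x0 (assumed given, as in the text above)."""
    x = np.asarray(x0, dtype=float).copy()
    g = np.asarray(c, dtype=float).copy()   # constrained gradient c(n)
    ortho = []                              # orthogonalized active normals a(k)
    active = set()
    while np.linalg.norm(g) > tol:
        # Step 1: largest feasible step along g before a new constraint binds.
        s_best, j_best = None, None
        for j in range(len(b)):
            if j in active:
                continue
            rate = A[j] @ g                 # change of the margin D:j along g
            if rate < -tol:                 # margin shrinks: constraint ahead
                s_j = (A[j] @ x - b[j]) / (-rate)
                if s_best is None or s_j < s_best:
                    s_best, j_best = s_j, j
        if j_best is None:
            raise ValueError("unbounded in the gradient direction")
        x += s_best * g
        # Step 2: Gram-Schmidt the new normal against previous active ones,
        # then project it out of the gradient direction.
        a = np.asarray(A[j_best], dtype=float).copy()
        for ak in ortho:
            a -= ak * (a @ ak) / (ak @ ak)
        ortho.append(a)
        active.add(j_best)
        g -= a * (g @ a) / (a @ a)
    return x

# Example: maximize 2x + y subject to x <= 1, y <= 1, x >= 0, y >= 0,
# rewritten in the >= form used above.
A = np.array([[-1., 0.], [0., -1.], [1., 0.], [0., 1.]])
b = np.array([-1., -1., 0., 0.])
x_opt = gradient_step_lp([2., 1.], A, b, [0.1, 0.1])
# x_opt is approximately [1.0, 1.0]
```

On this example the first step runs from the interior point to the wall x = 1, the gradient is then projected onto that wall, and the second step slides up it to the corner (1, 1), where the projected gradient vanishes and the algorithm stops.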

                                                Carl F. Kaun

                                                ckaun@aids-UNIX
                                                415/941-3912

------------------------------

Date: 21 November 1984 1014-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - Set Theoretic Problem Translation (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Robert Paige
        Date:   November 27, 1984
        Time:   10:00 - 12:00
        Place:  WeH 8220
        Title:  "Mechanical Translation of Set Theoretic Problem
                 Specifications into Efficient RAM Code"


Many computer problems can be expressed accurately and easily in the
following form: 'find the unique set s subset of t satisfying
predicate k(s) and minimizing objective function f(s)'.  Although such
specifications are generally unsolvable, we can provide rather broad
sufficient conditions under which these formal problem statements can
be compiled into efficient procedural implementations with predictable
time and space complexities.  A toy implementation of such a compiler
has been implemented and used within the RAPTS transformational
programming system.

Our methodology depends on three fundamental program transformations,
two of which resemble classical numerical techniques.  The first
transformation solves roots of set theoretic predicates by iterating
to a fixed point.  It turns an abstract functional program
specification into a lower level imperative form with emerging
strategy.  The second transformation is a generalized finite
differencing technique.  It implements program strategy efficiently by
forming access paths and incremental computations.  The third
transformation is a top down variant of Schwartz's method of data
structure selection by basings.  It replaces sets and maps by
conventional storage structures.

The method will be illustrated using two examples -- graph
reachability and digraph cycle testing.

This is a special 2-hour lecture with a 10-minute break in the middle.

------------------------------

End of AIList Digest
********************

∂28-Nov-84  1620	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #162    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Nov 84  16:20:00 PST
Date: Wed 28 Nov 1984 13:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #162
To: AIList@SRI-AI


AIList Digest           Wednesday, 28 Nov 1984    Volume 2 : Issue 162

Today's Topics:
  AI Tools - ML to Interlisp Translator & SYMBOLICS 3670 Software,
  Representation - Nonverbal Meaning Representation,
  Databases - Obsolete Books,
  Publicity - New Scientist AI Series,
  Brain Theory - PBS Series on the Brain & Minsky Quote,
  Linguistics - Language Simplification & Natural Language Study,
  Seminars - The Structures of Everyday Life  (MIT) &
    Language Behavior as Distributed Processing  (Stanford) &
    Full Abstraction and Semantic Equivalence  (MIT)
----------------------------------------------------------------------

Date: 27 Nov 84 12:54:44 EST
From: DIETZ@RUTGERS.ARPA
Subject: ML to Interlisp Translator Wanted

I'd like to get a translator from ML to Interlisp.  Does anyone have one?

Paul Dietz (dietz@rutgers)

------------------------------

Date: Tue, 27 Nov 84 12:59:42 pst
From: nick pizzi <pizzi%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: SYMBOLICS 3670 software

     Would anyone happen to know whether or not the SYMBOLICS machines
(specifically, the 3670) have PROLOG and/or C as available language
options?

     Furthermore, does the 3670 have any available software packages
for image processing (especially, symbolic image processing)?

     Thank you in advance for any information which you might provide!

                                                Sincerely,
                                                nick pizzi

------------------------------

Date: Wed, 28 Nov 84 09:59:31 pst
From: Douglas young <young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: Nonverbal meaning

  Is there anyone out there working on completely nonverbal meaning
representations of words and sentences?  Although I have been working
on this problem for a very substantial time, and have reached some
significant solutions ( which I expect to have published during 1985
in the form of a book, the draft ms for which is already completed,
and in several papers ), I have to date been unable to discover
anyone else who is working on this specific aspect of NLU.
However, it is impossible to believe that there are no others working
on this, and a newly acquired membership of the AIList appears to be
an invaluable way of finding out who is involved and where they are.
If you are working in this area, or if you know of anyone who is,
please would you send me a message ( network address as in header )
with a short note of what is being done, and include a postal address;
alternatively, write or call me.

      Douglas Young
      Dept. of Computer Science,
      University of Manitoba,
      Winnipeg,
      Manitoba, R3T 2N2
      Canada                  Tel: (204) 474 8366  (lab)
                                         474 8313  (messages)
 PS: Two original papers describing some of the principles of the techniques
     I employ, which were published in the medical literature during 1982-83,
     are largely out of date in almost every respect ( except for some of the
     neurological arguments, which form the foundation of the principles ), so
     I am not including their references here.

------------------------------

Date: Tue, 27 Nov 84 18:05:24 mst
From: jlg@LANL (Jim Giles)
Subject: obsolete books?

> Sony has recently introduced a portable compact optical disk player.
> I hear they intend to market it as a microcomputer peripheral for
> $300.  I'm not sure what its capacity will be, so I'll estimate it at
> 50 megabytes per side.  That's 25000 ascii coded 8 1/2x11 pages, or
> 1000 compressed page images, per side.  Disks cost about $10, for a
> cost per word orders of magnitude less than books.

The capacity of a normally formatted compact disc (audio people spell it
with a 'c') is about 600 megabytes.  That's without counting the error
correcting information.  The number is for about one hour of music sampled
with two 16-bit channels at a rate of 44.1 kHz.  Furthermore, some companies
are already demonstrating 'write once' disks with about 500 megabytes
for use as computer peripherals.  I've even seen one proposal for an
erasable disk using magneto-optical technology.
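The quoted 600-megabyte figure follows directly from the audio parameters given above; the arithmetic (before error-correction overhead) is:

```python
# One hour of two-channel 16-bit audio sampled at 44.1 kHz, not counting
# the error-correcting information.
bytes_per_second = 44_100 * 2 * 2    # samples/sec * channels * bytes/sample
capacity = bytes_per_second * 3600   # one hour of music
# capacity == 635_040_000 bytes, i.e. "about 600 megabytes"
```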

It has already been suggested that the advent of very cheap mass storage
devices will soon replace dictionaries, encyclopedias, catalogues, etc.
There has also been talk of software (such as spelling checkers) which
require very large data bases being either cheap or public domain.  I
think it will be a while before books are replaced, though.  Nobody wants
to carry a video monitor in their briefcase just to catch up on their
favorite science fiction interests.  Besides, paperback books are still
cheaper than compact discs by about a factor of 4 or more.

I'm holding off buying new drives for my home computer for a while.  This
new stuff seems to be worth waiting for.

------------------------------

Date: 27 Nov 84 17:00:07 EST
From: DIETZ@RUTGERS.ARPA
Subject: New Scientist AI Series

The British magazine New Scientist is running a three part series on AI.
The first article, in the Nov. 15 issue, has the title "AI is stark naked
from the ankles up".  It has some very interesting quotes from John McCarthy,
W. Bledsoe, Lewis Branscomb at IBM and others.  The article is critical
of the way AI has been oversold, of the quality (too low) and quantity
(too little) of AI research, and of the US reaction to the Japanese new
generation project, especially Feigenbaum and McCorduck's book.

------------------------------

Date: Wed 28 Nov 84 11:53:16-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: PBS Series on the Brain

The PBS series on the brain has focussed each week on specific neural
systems and their effects on behavior.  The last show concentrated on
hearing and speech centers, and had a particularly enlightening
example.  It showed a lawyer who had suffered damage to his hearing or
linguistic centers.  (Sorry, I don't remember exactly where.)  He
still had a normal vocabulary and could understand most sentences,
although slowly and with great difficulty.  He was unable to parse or
store function words, however.  When asked "A leopard was killed by a
lion.  Which died?", he was unable to answer.  (He also knew that he
had no way of determining the answer.)  When asked "My uncle's sister
..., is it a man or a woman?" he was similarly unable to know.

Another example was a woman who could not recognize faces, even when
she was presented with a picture of her interviewer and told who it
was.  She could describe the face in detail, but there was no flash
of recognition.  She lives in a world of strangers.

A previous show described various forms of amnesia, and the role of the
hippocampus in determining which events are to be stored in long-term
memory.  Or rather, in the conscious LTM.  One subject was repeatedly
trained on the Tower of Hanoi puzzle; each time it was completely
"new" to him, but he retained strategy skills learned in each session.

The question was raised why no one can remember events prior to the
age of five.  I suppose that we create a mental vocabulary during the
first years, and later record our experiences in terms of that
vocabulary.  (It would be awkward, wouldn't it, if the vocabulary
changed as we got older?  Memories would decay as we lost the ability
to decode them.)  This suggests that we might be unable to learn
concepts such as gravity, volume, and cooperation if we do not learn
them early enough.  I'm sure there must be evidence of such phenomena.

The last two shows in the series will be shown Saturday (in the San
Francisco area).

                                        -- Ken Laws

------------------------------

Date: Mon, 26 Nov 1984  03:27 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Quote, V2 #161

I certainly have suggested that the human brain is a kludge, in the
sense that it consists of many complex mechanisms, accumulated over
the course of evolution, a lot of which are for correcting the bugs in
others.

However, this is not a useful quotation for public use, because
outside of engineering, the word "kludge" is not in the general
language.  There isn't even any synonym for it.  The closest phrase
might have been "Rube Goldberg device" -- but that, too, is falling
out of use.  Anyway, a Rube Goldberg device did not have the right
sense, because that cartoonist always drew machines which were
complicated serial devices with no loops and, hence, no way to correct
bugs.  My impression is that a "kludge" is a device which actually
usually works, but not in accord with neat principles but because all
or most of its bugs have been fixed by adding ad hoc patches and
accessories.

By the way, the general language has no term for "bug" either.
Programmers mean by "bug" the mechanism responsible for an error,
rather than the surface error itself.  The lack of any adequate such
word suggests that our general culture does not consider this an
important concept.  It is no wonder, then, that our culture has so
many bugs.

------------------------------

Date: Mon, 26 Nov 84  8:20:27 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Language Simplification

On Frawley on Gillam on simplification:

You needn't go so far south for pen/pin homophony, it occurs in certain
midwestern dialects and I believe even in New Jersey, as merger pure and
simple.  And of course you are talking not about homophony but about shifted
contrast such that `pin' of your dialect is "homophonous" with `pen' of the
southern dialect.  (Is English `showed' "homophonous" with the French word
for `hot'?)

Phonological systems do change in the ways that you deny, as
witness for example the falling together of many vowels to i in modern
Greek (classical i, ei, oi, y, long e (eta), yi all become high front i),
and the merger of several Indo-European vowels in Sanskrit a.

I have not seen Gillam's comments (just joined the list), so let me say
too that languages do preserve systematic contrasts while shifting their
location, and that the observation about southern dialects of US English
is correct.  Whether the result of change is merger or relocated contrast
depends on sociological as well as physiological and psychoacoustic factors,
and no simple blanket statement fits all cases.

        Bruce Nevin

------------------------------

Date: Mon, 26 Nov 1984  03:12 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Natural Language Study, V2 #160


Bravo, Dyer!  As you suggest, there is indeed much to learn from the
study of natural language -- but not about "natural language itself";
we can learn what kinds of manipulations and processes occur in the
under-mind with enough frequency and significance that it turns out to
be useful to signify them with surface language features.

For example, why do all languages have nouns and adjectives?  Because
the brain has some way to aggregate the aspects of "objects" and
retrieve these constellations of partial states of mind.  Why
adjectives?  To change particular properties of noun-objects.  Why put
adjectives near the nouns?  So that it is easy to recognize which
properties of what to modify.  Now, if we consider which surface
relations are easiest to recognize by machinery, the near-ness of
words is surely among the easiest of all -- so we can expect that
human societies will find an important use for this.  Thus, if
adjective-noun relations are "universal" in human languages, it need
not be because of any mysterious syntactic apriori built into some
innate language-organ; it could be because that underlying cognitive
operation -- of modifying part of a representation without wrecking
the rest of it -- is a "cognitive universal".  Similarly, the study of
how pronouns work will give us clues about how we link together
different frames, scripts, plans, etc.

All that is very fine.  We should indeed study languages.  But to
"define" them is wrong.  You define the things YOU invent; you study
the things that already exist.  Then, as in Mathematics, you can also
study the things you define.  But when one confuses
the two situations, as in the subjects of generative linguistics
or linguistic competence -- ah, a mind is a terrible thing to waste,
as today's natural language puts it.

------------------------------

Date: 27 Nov 1984 11:13-PST (Tuesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Natural Language


        The reason why it is important to study natural languages
"on their own" and to understand language degradation etc. is because
language influences how its speakers think.  This idea, known commonly
as the "Whorf hypothesis" has its correlate in computer languages
and in potential interlingua.  The usual examples include AmerIndian
languages which have little concept of time.
        If you have only Fortran to program in, many elegant programming
solutions simply will not present themselves.  The creation of
higher level languages allows the programmer to make use of complex
data structures such as 'predicates' and 'lists'  instead of addresses.
        These higher level data structures correspond to the concepts
available in a natural language.  Primitive languages which exist mainly
for simple communication will not allow the kind of
thinking (programming) that a language with "higher level" concepts
(data structures) does.
        In the same way that a conceptually rich language (like Sanskrit)
allows greater expression than Haitian Creole does, and that
LISP vs. assembly does, Sastric Sanskrit functions as the ideal
interlingua because of the nature of its high level data structures
(i.e. is formal and yet allows expression of poetry and metaphor).
And in the same way that a particular programming language is chosen
over another for an application, Sastric Sanskrit should be chosen
(or at least evaluated) for those doing work in Machine Translation.

Rick Briggs

------------------------------

Date: 25 Nov 1984  22:38 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - The Structures of Everyday Life  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                    The Structures of Everyday Life

                              Phil Agre

             Wednesday, November 28; 4:00pm  8th floor playroom



Computation can provide an observation vocabulary for gathering
introspective evidence about all manner of everyday reasoning.  Although
this evidence is anecdotal and not scientific in any traditional sense, it
can provide strong constraints on the design of the central systems of
mind.  The method is cyclical: attempts to design mechanisms to account
for the phenomenology of everyday activity suggest new classes of episodes
to look out for, and puzzling anecdotes show up weaknesses in designs and
suggest improvements.

I have been applying this method particularly to the study of routines,
the frequently repeated and phenomenologically automatic rituals of which
most of daily life is made.  Some common routines in the lives of people
like me include choosing the day's clothes, making breakfast, selecting a
turnstile in the subway, listening to a familiar piece of music, beginning
and ending conversations, picking up a coffee mug, and opening the day's
mail.  It is not reasonable to view a routine as an automated series of
actions, since people understand what they're doing when carrying out
routine actions at least well enough to recover sensibly if things don't
proceed in a routine way.

I propose to account for the phenomenology of the development of mental
routines in terms of the different stages of processing that arise in the
interaction of a few fairly simple mechanisms.  These stages appear vaguely
to recapitulate the stages of development of cognition in children.

This talk corresponds roughly to my thesis proposal.



COMING SOON: Jonathan Rees [Dec 5], Alan Bawden [Dec 12]

------------------------------

Date: Tue, 27 Nov 1984  23:52 PST
From: KIPARSKY@SU-CSLI.ARPA
Subject: Seminar - Language Behavior as Distributed Processing 
         (Stanford)


Jeff Elman (Department of Linguistics, UCSD)
"Parallel  distributed  processing:   New  explanations  for
                        language behavior"

        Dec. 11, 1984, 11.00 A.M.
        Stanford University, Ventura Hall Conference Room

Abstract:

Many students of human behavior  have  assumed  that  it  is
fruitful  to  think  of the brain as a very powerful digital
computer.  This metaphor  has  had  an  enormous  impact  on
explanations  of  language  behavior.   In  this talk I will
argue that the metaphor is  incorrect,  and  that  a  better
understanding  of  language  is gained by modelling language
behavior with parallel distributed processing (PDP) systems.
These  systems offer a more appropriate set of computational
operations, provide richer insights into behavior, and  have
greater biological plausibility.

I will focus on three specific areas  in  which  PDP  models
offer  new explanations for language behavior: (1) the abil-
ity to simulate rule-guided behavior without explicit rules;
(2)  a  mechanism  for analogical behavior; and (3) explana-
tions for the effect of context on  interpretation  and  for
dealing with variability in speech.

Results from a PDP model  of speech perception  will be pre-
sented.

------------------------------

Date: 27 November 1984 09:21-EST
From: Arline H. Benford <AH @ MIT-MC>
Subject: Seminar - Full Abstraction and Semantic Equivalence  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


       APPLIED MATHEMATICS AND THEORY OF COMPUTATION COLLOQUIUM


                  "FULL ABSTRACTION AND SEMANTIC EQUIVALENCE"

                                Ketan Mulmuley
                          Carnegie Mellon University


                       DATE:  TUESDAY, DECEMBER 4, 1984
                       TIME:  3:30PM  REFRESHMENTS
                              4:00PM  LECTURE
                      PLACE:  2-338

A denotational semantics is said to be fully abstract if denotations of two
language constructs are equal whenever these constructs are operationally
equivalent in all programming contexts and conversely.  Plotkin showed that the
classical model of continuous functions was not a fully abstract model of typed
lambda calculus with recursion.  We show that it is possible to construct a
fully abstract model of typed lambda calculus as a submodel of the classical
lattice theoretic model.
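The definition in the first paragraph can be written out symbolically (my
notation, not the announcement's): a denotational semantics [[.]] is fully
abstract when

```latex
% Full abstraction (standard formulation; notation mine, not the
% announcement's).  C ranges over all program contexts, and
% C[M] \simeq_{\mathrm{obs}} C[N] means the two complete programs
% have the same observable behaviour operationally.
\llbracket M \rrbracket = \llbracket N \rrbracket
\quad\Longleftrightarrow\quad
\forall C.\; C[M] \simeq_{\mathrm{obs}} C[N]
```

The left-to-right direction (soundness, or adequacy) usually holds easily;
Plotkin's result concerns the failure of the right-to-left direction for the
continuous-function model.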

The existence of "inclusive" predicates on semantic domains plays a key role
in establishing semantic equivalence of operational and denotational
semantics.  We give a mechanizable theory for proving such existence.  In
fact, a theorem prover has been implemented which can almost automatically
prove the existence of most of the inclusive predicates which arise in
practice.


HOST:  Professor Michael Sipser

------------------------------

End of AIList Digest
********************

∂29-Nov-84  1254	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #163    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Nov 84  12:51:38 PST
Date: Thu 29 Nov 1984 09:25-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #163
To: AIList@SRI-AI


AIList Digest           Thursday, 29 Nov 1984     Volume 2 : Issue 163

Today's Topics:
  Philosophy - Dialectics,
  Seminars - Aesthetic Experience  (Berkeley) &
    Phonetics, Discourse, Semantics  (CSLI Stanford) &
    The KEE Knowledge Engineering System  (Stanford)
----------------------------------------------------------------------

Date: Tue, 27 Nov 84 20:42:29 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Dialectics

Joel Isaacson (USC) and I (Frawley, Delaware) have recently exchanged, briefly,
ideas about DIALECTICS. Isaacson is using dialectics in a theory of image
processing; I am using dialectics in my own work on Soviet theories of
language and cognition and the use of Soviet theories to explain
various quandaries about such things as language learning and text
processing. We thought it would be appropriate to have a general
discussion of dialectics on the AIList.

I have agreed to begin the discussion with a general introduction. Below are
some basic statements on what I see to be the nature and implications
of dialectics, along with some comments on how I see these ideas relating
 to problems of language and cognition. I offer these ideas not as
definitive statements, but as a means to get the ball rolling on a
discussion of dialectics. We (Isaacson and I) would appreciate any
commentary, arguments, etc. that can be given.


1. What is, and Whence, Dialectics?

Dialectics is, first of all, a method. It is a method of analyzing any
phenomenon not in terms of the phenomenon as an isolated entity, but
in terms of the phenomenon in its opposition to other phenomena and how
the opposition of two phenomena give rise to a third phenomenon (the
classic thesis, antithesis, synthesis trichotomy from Hegel). This idea
of opposition can of course be traced back in Western philosophy to Plato
(who loved oppositions), but is more conveniently situated in the work
of Marx. Marx objected to both idealism and positivism: to the former
because it ultimately situated knowledge in one metaphysical entity
(e.g., the pre-programmed subject, as Kant and Piaget argue, or in the
world of pure forms, as Plato argued) and to the latter because it
situated knowledge wholly in terms of the object of knowledge (i.e.,
the world irrespective of the perceiving subject). Marx saw knowledge
only in the dialectical struggle of the perceiving subject and perceived
object which unify in their struggle to produce knowledge. Dialectics is
a way of walking between hopeless metaphysics (idealism) and hopeless
banality (the world). Thus, it does no good simply to talk about
either simple properties of the subject or of the object since
neither exists without the other and neither the subject nor the
object has any privileged status in epistemology. If an epistemology
privileges the subject at the expense of the object, one gets
Piagetian psychology; if one privileges the object at the expense of the
subject, one gets behaviorism, Carnap, or the early Wittgenstein.


2. What does dialectics imply (I use "dialectics" in the  singular since
it is a totality, like the word "linguistics")?

First, it implies that knowledge is the activity of constant struggle.
What is primary in dialectics is not knowledge, but knowING. What is
primary in any dialectical epistemology is not knowledge structures,
but the BUILDING OF KNOWLEDGE. As Leontiev has said, heuristics are
more important than algorithms.

Second, it implies that development never ends. If knowing is a constant
struggle of opposites which unite in synthesis, and if that synthesis then
is opposed to something else and unites with it to produce another
synthesis, knowing never stops. We suffer, in developmental theory, from
a Piagetian epistemological blindness which views development as stopping
after logical operations: thereafter only mere learning occurs. When
studies have shown that only 50% of the U.S. population has achieved
logical operations, I begin to doubt Piaget and begin to side with
Luria, who has shown (Cognitive Development) that development, because
of its dialectical underpinnings, never stops.

Third, it implies that one must be a materialist. The subject is not
a metaphysical entity, but located in the world; the object is not
a metaphysical entity, but located in the world; the dialectical
synthesis of the two is not a metaphysical entity, but a process and
product conditioned by the material circumstances and nature of the
subject and object: dialectics secularizes knowing.

Fourth, it implies that one must always consider history. If knowing is
tied to dialectics in material circumstances, then one must also
realize that circumstances can only be historically given. As
Derrida has argued in his introduction to Husserl's Origin of Geometry, there
are no extra-systemic a priori ideas, only historical a priori ideas.
In this way, biological givens are also historically given because
both ontogenesis and phylogenesis are historical.

3. Two Psycholinguistic Implications of Dialectics

It is very chic these days to abandon linguistic competence in favor
of communicative competence by arguing that linguistic competence is
idealized and that communicative competence (pragmatics, speech acts,
intentionality, etc.) is "more real" because communicative competence
considers how language is used in the world. Dialectics shows that this
is a pseudo-argument.

Communicative competence still privileges the subject only, by giving
taxonomies of intentions which the subject felicitously deploys
"in the world." How is this done? That is the "real" question.
Pragmatics, in criticizing Chomskyan competence for being idealized,
falls prey to its own criticisms since it still privileges the
subject and idealized linguistic knowledge just one step higher
than the sentence: communicative competence is another form of
idealism (for a very brief discussion, see my review in the December 1984
issue of Language, p. 967).

Dialectics has another implication for theories of text processing.
It is typical in text theory to privilege either the subject or the
object: if privileging the former, one accounts for text processing
in terms of mental structures -- schemas, frames, scripts; if
privileging the latter, one accounts for text processing in terms
of the structure of the text -- rhetorical structure, propositional
hierarchies, complexity, etc. A dialectical model would ask how
schemas and text structure interact.

Dialectical considerations of text processing have implications for
AI. In Schank and Abelson's model, e.g., the script or frame is
seminal. From a dialectical model, the script is less important than
the ways by which the machine "decides" to access the script to
begin with: the knowledge structure is less important than the
procedures to deploy the knowledge structure since that is the
point where the machine as subject interacts with the text as
object.

Well, I've gone on perhaps too long for some preliminary statements
about dialectics, so I'll stop here. Any comments??

Bill Frawley

20568.ccvax1@udel

------------------------------

Date: Wed, 28 Nov 84 17:13:33 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Aesthetic Experience  (Berkeley)

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

SPEAKER:        Thomas  G.  Bever,  Psychology   Department,
                Columbia University

TITLE:          The Psychological basis of aesthetic experi-
                ence:  implications for linguistic nativism

    TIME:                Tuesday, December 4, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

ABSTRACT:       We define the notion of Aesthetic Experience
                as   a   formal   relation   between  mental
                representations:   an  aesthetic  experience
                involves  at least two conflicting represen-
                tations that are  resolved  by  accessing  a
                third  representation.   Accessing the third
                representation releases  the  same  kind  of
                emotional  energy as the 'aha' elation asso-
                ciated with discovering the  solution  to  a
                problem. We show how this definition applies
                to  various  artforms,  music,   literature,
                dance.   The  fundamental aesthetic relation
                is similar to the  mental  activities  of  a
                child  during  normal cognitive development.
                These considerations explain the function of
                aesthetic  experience:  it elicits in adult-
                hood the characteristic mental  activity  of
                normal childhood.

                The fundamental activity revealed by consid-
                ering the formal nature of aesthetic experi-
                ence involves developing  and  interrelating
                mental  representations.   If  we  take THIS
                capacity  to  be  innate  (which  we  surely
                must),   the question then arises whether we
                can account for the phenomena that are  usu-
                ally argued to show the unique innateness of
                language as a mental organ.  These phenomena
                include  the  emergence of a psychologically
                real grammar,  a critical  period,  cerebral
                asymmetries.     More    formal   linguistic
                properties may be accounted for as partially
                uncaused (necessary) and partially caused by
                general  properties  of  animal  mind.   The
                aspects  of  language  that may remain unex-
                plained (and therefore non-trivially innate)
                are  the  forms of the levels of representa-
                tion.

------------------------------

Date: Wed 28 Nov 84 17:24:47-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Phonetics, Discourse, Semantics  (CSLI Stanford)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                   ABSTRACT OF TODAY'S SEMINAR
                   ``Parsing Acoustic Events''

This seminar addresses the problem of formulating a language-independent
representation of the acoustic aspects of natural, continuous speech from
which a general parser using language-specific grammars can recover
linguistic structure.  This decomposition of the problem permits a
representation that is stable over utterance situations and provides
constraints that handle some of the difficulties associated with partially
obscured or ``incomplete'' information. A system will be described which
contains a grammar for parsing higher-level (phonological) events as well
as an explicit grammar for low-level acoustic events. It will be shown that
the same techniques for parsing syntactic strings apply in this domain.  The
system thus provides a formal representation for physical signals and a way
to parse them as part of the larger task of extracting meaning from sound.
                                              --Meg Withgott
                           ←←←←←←←←←←←←

                ABSTRACT OF NEXT WEEK'S SEMINAR
           ``The Structures of Discourse Structure''

This talk will introduce a theory of discourse structure that attempts to
answer two rather simple questions, namely: What is discourse? What is
discourse structure? In this work (being done jointly with Sidner at BBN)
discourse structure will be seen to be intimately connected with two
nonlinguistic notions--intention and attention. Intentions will be seen to
play a primary role not only in providing a basis for explaining discourse
structure, but also in defining discourse coherence, and providing a coherent
notion of the term ``discourse'' itself.  A main thesis of the theory is that
the structure of any discourse is a composite of three interacting
constituents: the structure of the actual sequence of utterances in the
discourse, a structure of intentions, and an attentional state. Each of these
constituents of discourse structure both affects and is affected by the
individual utterances in the discourse.  The separation of discourse
structure into these three components allows us to generalize and simplify a
number of previous results and is essential to explaining certain discourse
phenomena. In particular, I will show how the different components contribute
to the proper treatment of various kinds of interruptions, as well as to
explanations of the use of certain types of referring expressions and of
various expressions that function directly to affect discourse structure.
                                        --Barbara J. Grosz
                        ←←←←←←←←←←←←

                  ABSTRACT OF NEXT WEEK'S TINLUNCH
    Syntactic Features, Semantic Filtering, and Generative Power

There is a trade-off in linguistic description using grammars with a syntax
and a separate semantics, such as GPSG.  One can often either use a
syntactic feature or appeal to semantic filtering to achieve the same ends.
Current GPSG countenances no semantic filtering, i.e. does not overgenerate
strings in the syntax and then let the semantics throw some away as
`uninterpretable'.  In the Tinlunch I would like to discuss this position
in light of some work I did in my dissertation which looks like it requires
semantic filtering, and in light of a paper by Marsh & Partee which shows
that adding certain types of semantic filtering to a grammar greatly
increases the generative power.                  --Peter Sells

                         ←←←←←←←←←←←←


            CSLI WORKSHOP ON THE SEMANTICS OF PROGRAMS

Tuesday, December 4, 1984
Location: The Bach Dancing and Dynamite Society, Princeton CA
          (a suburb of Half-Moon Bay)

There are long-standing traditions for the study of natural language
semantics and CSLI projects have been extending and reinterpreting them.
There is a briefer, but substantial, tradition for the study of the
semantics of programming languages.  Over the past few months, there have
been a series of presentations and discussions about similarities and
differences between the semantic accounts of natural and computational
languages.  Theories of natural language semantics have raised a number of
issues.  The purpose of the workshop is to discuss how some of these
theories can give rise to better accounts of the relation between
programs/program executions and the world.  Participation in the workshop
is by invitation only.  If you are interested in being invited to the
workshop, contact Ole Lehrmann Madsen (Madsen at SU-CSLI). If you have any
questions regarding the workshop you may contact Terry Winograd (TW at
SU-SAIL) or Madsen.
                         ←←←←←←←←←←←←

                        PH.D. PROPOSAL

On Tuesday, December 4, from 3:15 p.m. to 5:05 p.m., in Bldg. 200-217, Kurt
Queller will talk about ``Active Exploration with syntagmatic routines in
the child's construction of grammar:  Some phonological perspectives.'' Based
on detailed longitudinal analysis of data from 3 one-year-olds, the proposed
dissertation will provide a typology of syntagmatic phonological routines
or ``word-recipes'' used by young children in building a repertoire of
pronounceable words.  Then, it will show how individual children exploit
particular combinations of routines in constructing a coherent phonological
system.  Extensive synchronic variability and changes over time will be
accounted for in terms of the child's systematic exploration of the options
implicit in the resulting system.

------------------------------

Date: Mon 26 Nov 84 11:15:02-PST
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - The KEE Knowledge Engineering System  (Stanford)

      [Forwarded from the SIGLUNCH distribution by Laws@SRI-AI.]

SPEAKER:     Richard Fikes, Director
             Knowledge Systems Research and Development
             IntelliCorp, Inc.

ABSTRACT:    The KEE System - An Integration of Knowledge-Based
             Systems Technology

DATE:        Friday, November 30, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

IntelliCorp has developed an integrated collection of  representation,
reasoning,  and  interface  facilities  for  building  knowledge-based
systems called  the  Knowledge  Engineering  Environment  (KEE).   The
system's components include (1) a frame-based representation  facility
incorporating features  of  UNITS,  LOOPS, and  KL-ONE  that  supports
taxonomic definition  of  object  types,  structured  descriptions  of
individual objects,  and  object-oriented  programming;  (2)  a  logic
language  for  asserting  and  deductively  retrieving  facts;  (3)  a
production rule language with  user-controllable backward and  forward
chainers that  supports  PROLOG-style  logic programming;  and  (4)  a
graphics work bench for  creating display-based user interfaces.   KEE
uses  interactive  graphics  to  facilitate  the  building,   editing,
browsing, and  testing of  knowledge  bases.  A  primary goal  of  the
overall  design  is  to  promote  rapid  prototyping  and  incremental
refinement  of  application  systems.    KEE  has  been   commercially
available since August 1983, and has been used by customers to build a
wide range  of application  systems.   In this  talk  I will  give  an
overview  of  the   KEE  system  with   particular  emphasis  on   its
representation and reasoning facilities, and discuss ways in which the
system provides significant leverage for its users.



Paula

------------------------------

End of AIList Digest
********************

∂30-Nov-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #164    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 30 Nov 84  00:02:40 PST
Date: Thu 29 Nov 1984 21:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #164
To: AIList@SRI-AI


AIList Digest            Friday, 30 Nov 1984      Volume 2 : Issue 164

Today's Topics:
  Algorithms - Karmarkar Algorithm & Linear Programming,
  Seminars - Search Complexity & User Interfaces  (IBM-SJ) &
    A Semantical Definition of Probability  (CSLI Stanford) &
    Learning in Stochastic Networks  (CMU)
----------------------------------------------------------------------

Date: 27 Nov 1984 17:19:38-EST (Tuesday)
From: S.Miller@wisc-rsch.arpa
Subject: Karmarkar Algorithm

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

The Karmarkar algorithm was presented at STOC (Symposium on Theory of
Computing) on May 1, 1984 (STOC '84, p. 302):
"A New Polynomial-Time Algorithm for Linear Programming".
The STOC proceedings are available from the ACM if your
location doesn't have them.

------------------------------

Date: Mon 26 Nov 84 17:33:08-PST
From: Walter Murray <OR.MURRAY@SU-SIERRA.ARPA>
Subject: Linear Programming Algorithms.

         [Forwarded from the Stanford bboard by CKaun@AIDS-UNIX.]

Some recent bboard messages have referred to linear programming. The
algorithm by Karmarkar is almost identical with iterative reweighted
least squares (IRLS). This latter algorithm is used to solve approximation
problems other than in the l2 norm. It can be shown that the form of
LP assumed by Karmarkar is equivalent to an l infinity approximation
problem. If this problem is then solved by the IRLS algorithm the
estimates of the solution generated are identical to those of the
Karmarkar algorithm (assuming certain free choices in the definition
of the algorithms). Perhaps it should be added that the algorithm is
not held in high regard in approximation circles.  To solve
an l infinity problem it is usually transformed to an LP and solved using
the simplex method.
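Murray's closing remark, that an l-infinity problem is usually transformed
to an LP and handed to a simplex-style solver, can be sketched concretely.
The snippet below is only an illustration using SciPy's `linprog` (a tool
that postdates this digest; the data and variable names are mine): minimize
t subject to -t <= (Ax - b):i <= t for every row i.

```python
import numpy as np
from scipy.optimize import linprog

# l-infinity (Chebyshev) approximation: minimize max_i |(A x - b)_i|.
# Standard LP reformulation over the variables (x, t):
#     minimize t   subject to   A x - t <= b   and   -A x - t <= -b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)

m, n = A.shape
c = np.zeros(n + 1)                          # objective: the bound t only
c[-1] = 1.0
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),     #  A x - t <= b
                  np.hstack([-A, -ones])])   # -A x - t <= -b
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_hat, t_hat = res.x[:n], res.x[-1]
print("max residual:", np.abs(A @ x_hat - b).max())
```

At the optimum t equals the maximum absolute residual, which is exactly the
l-infinity objective being minimized.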

A message from Kaun (forwarded by Malachi without a heading)
described an algorithm for LP which Kaun claimed requires o(n↑3) work. It
is easy to demonstrate the algorithm may fail to converge to the solution.
The following is a cross-section of a hole consisting of straight
sides. Water is poured into this hole from the point x.


                  x


                o                       o
                .                       .
                o                       o
                 .                     .
                  .                   .
                   o                 o
                      .         .
                           o

The water hits a facet. It continues to fall until it hits a second
facet which is a vertex. Unless the water is prepared to leave the first
facet hit, it will not reach the bottom.

------------------------------

Date: 27 Nov 84 11:45:16 PST (Tue)
From: Carl Kaun <ckaun@aids-unix>
Subject: Linear Programming Algorithm

Murray is correct -- the algorithm as stated will not usually converge
to the solution.  One problem is that removing components from the
gradient does not automatically force it to zero after N steps, as I
asserted.  It looks to me like the gradient stepping idea can still be
used in a more complicated scheme, and that the computational time for
the algorithm will be o(M*N↑2), where M is the number of constraints.
But I want to verify the details better than I did for my original
message before saying more.

I still wonder if finding such a solution to the continuous (as opposed to
the integer) linear programming problem has any significance.

                                                ckaun@aids-unix

------------------------------

Date: 27 Nov 84 22:31:30 PST (Tue)
From: Carl Kaun <ckaun@aids-unix>
Subject: Gradient Step Linear Programming (again)


Well, here we go again.  Let's see if this try stands up to scrutiny.
The claim is that the algorithm following (gradient step linear
programing) solves the linear programming problem in at most
o(M↑2*N +M*N↑3) operations.  I still don't know if that has any significance.
As before, the idea is to step, as best one can subject to the constraints,
along the gradient.  The terminating conditions are similar to the
algorithm given previously.

As before, the mathematical notation represents vectors by preceding them
with an underline, as "←x".  Subscripts are represented using a colon, ←c:j
being the j-th vector ←c. The inner product is represented by < * , * >.  A
functional form (i.e. f(  )) is used to represent things like sums. The rest
should be fairly obvious.

The statement of the linear programming problem is also as before, being to
maximize with respect to the N-dimensional vector ←x the linear functional:
               <←c , ←x >
 subject to the constraints:
               <←a:j , ←x > >= b:j   for j = 1, 2, ..., M
M >= N, as otherwise the solution is unbounded.

Assume for the moment an initial feasible vector ←x(0) in the interior (so
that there are initially no active constraints) of the polytope defined by
the constraints. ←c:0 = ←c.  All constraints are potentially active.

A.  From the current solution point ←x(n), find the constraint limiting
motion in the direction ←c:n, and the maximum feasible step size s>0  giving
the next solution point:  ←x(n+1) = ←x(n) + s*←c:n
    For j = 1, 2, ... M and j not a currently active constraint, compute
          D:j = <←x(n), ←a:j> - b:j    ( >= 0 )
          s:j = - D:j / <←c(n), ←a:j>
    s = min { s:j | s:j>0} , and the next active constraint has the index
j(n) providing the minimum.

B.  The next step is to compute a movement direction aligned with the
gradient (thus enabling improvement in the functional) that also satisfies
the active constraints.  The first active constraint was identified in the
previous step, thus:
          ←c(0) = ←c - ←a:j(n) * <←c, ←a:j(n)> / <←a:j(n), ←a:j(n)>

C.  Next determine which of the constraints active in the previous cycle
are active in this step, and modify the movement direction accordingly.  A
previously active constraint a:j is active in this cycle if
          <←a:j, ←c(i)> < 0.
That is, motion along the current direction ←c(i) would violate the
constraint.  If the constraint is active, then the Gram-Schmidt
procedure is applied to ←a:j to orthogonalize the vectors involved and
thereby determine the component to be removed from ←c(i), yielding ←c(i+1).
          ←a(i) = ←a:j - sum (n=0 to i-1) [ <←a:j, ←a(n)> / <←a(n), ←a(n)> ] * ←a(n)
          ←c(i+1) = ←c(i) - ←a(i) * <←c(i), ←a(i)> / <←a(i), ←a(i)>
When all of the previously active constraints have been determined to be
either active or inactive for the current cycle, the next step direction is
          ←c:n = ←c(i), for the latest i.

(It appears necessary, for each determination of a ←c(i), to scan the
entire set of constraints which were active in the previous cycle (but have
not yet been determined to be active in the current cycle) before deciding
that none is active in the current cycle.  Practically, there will
be only one active constraint in most of the cycles, and the
trajectory of the algorithm passes through various of the facets of the
polytope most of the time.)

The stopping condition results when ←c(i) = ←0; that is, when the objective
gradient ←c lies in the cone formed from the combination of negatively
scaled gradients of the constraints.  This is the Kuhn-Tucker condition
of optimality.  Equivalently, N (linearly independent) constraints are found
to be active.  I don't remember that the Kuhn-Tucker conditions are
sufficient, but in any event this is the optimal point because there
is no feasible motion direction which improves the objective.

Unlike the previous algorithm, in this the identification of new constraints
can result in movement away from a previously active constraint.  When this
happens, the previously active constraint can be totally removed from further
consideration, due to the convexity of the problem (this assertion seems
obvious, but has not been PROVED by me).  The algorithm
encounters a new active constraint each cycle, and therefore converges
in at most M cycles, this being the maximum number of constraints that
can be newly encountered.  In practice again, the trajectory of the
algorithm will generally be such that convergence will occur in many fewer
cycles than M.

Steps A-C are repeated until the stopping condition occurs.

As indicated above, the algorithm converges in at most M cycles.  For each
cycle, step A requires O(N) multiplications and additions to compute the
inner product, etc. for each of O(M) constraints, for a total of O(MN)
operations.  Step B requires O(N) operations, which scarcely affects the
overall timing.  Step C can potentially result in the identification of N-1
active constraints.  Each such identification requires the removal of O(N)
orthogonal components, and each such removal entails O(N) operations, for
an overall count of O(N↑3) operations to remove the effects of previously
identified active constraints.  Also, O(N) constraints may have to be
scanned to determine if they are active for each such identification,
each such determination requiring O(N) operations, resulting again in
a total of O(N↑3) operations for step C.  Performing steps A and C therefore
requires O(M↑2*N + M*N↑3) operations.

An initial feasible point can be determined starting from an arbitrary point
(say the origin), identifying the unsatisfied constraints, and moving in
directions that satisfy them.  It may be more direct to simply start with a
"superoptimal" point, say K*←c for suitably large K, and iterate using
essentially the previously described algorithm along the negative constrained
gradient direction to feasibility.  The resulting feasible point
will also be optimal for the original problem.
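A phase-one procedure of the first kind mentioned above might be sketched as
follows, for constraints in matrix form A @ x <= b.  The specific step rule
(project onto the most-violated constraint's hyperplane, in the style of
relaxation methods for linear inequalities) is my assumption for
illustration, not necessarily the author's exact procedure.

```python
import numpy as np

def find_feasible(A, b, x0, max_iter=1000, eps=1e-10):
    """Move an arbitrary start x0 toward feasibility for A @ x <= b by
    repeatedly projecting onto the most-violated constraint hyperplane."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        viol = A @ x - b              # positive entries mark violated rows
        j = int(np.argmax(viol))
        if viol[j] <= eps:
            return x                  # every constraint now satisfied
        a = A[j]
        x -= a * viol[j] / (a @ a)    # land on the hyperplane a . x = b_j
    raise RuntimeError("no feasible point found in max_iter steps")
```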

                                                Carl F. Kaun

                                                ckaun@aids-UNIX
                                                415/941-3912

------------------------------

Date: Wed, 28 Nov 84 17:12:55 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminars - Search Complexity & User Interfaces  (IBM-SJ)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

                             CALENDAR
                       (DECEMBER 3 - 7, 1984)

  Wed., Dec. 5 Computer Science Seminar
  10:00 A.M.  HOW HARD IS NP-HARD?
  2C-012      This talk examines the average complexity of
            depth-first search for two different search models.
            The first model has no cutoff at unpromising internal
            nodes, but does terminate at a leaf node when the
            leaf node represents a successful search outcome.
            This model leads to an average complexity that grows
            anywhere from linearly to exponentially in the depth
            of the tree depending on the probability of choosing
            the best branch to search first at each internal node
            of the tree.  Good decisions lead to linear
            complexity and bad decisions lead to exponential
            complexity.  The second model examines tree searching
            with internal cutoff when unpromising paths are
            discovered.  In this model, the search terminates
            successfully when it reaches the first leaf.  The
            model is representative of branch-and-bound algorithms
            that guarantee that the first leaf reached is a
            successful leaf.  Roth's D-algorithm for generating
            test vectors for logic circuits fits this model, and
            White's efficient algorithm for solving the Traveling
            Salesman problem also fits except for the
            distribution of cutoff probabilities.  Our model shows
            that the number of nodes visited during a depth-first
            search grows at most linearly on the average,
            regardless of cutoff probability.  If cutoff
            probability is very high, the search fails with a
            very high probability, and visits an average number
            of nodes that grows as O(1) as the tree depth
            increases.  If cutoff probability is very low, then
            the algorithm finds a successful leaf after visiting
            only O(N) nodes on the average, where N is the depth
            of the tree.  Many NP-complete problems can be solved by
            depth-first searches.  If such problems can be solved
            by algorithms that order the depth-first search to
            terminate at the first leaf, then this work and the
            work by Smith suggest that the average complexity
            might grow only polynomially in the tree depth,
            rather than exponentially as the worst-case analysis
            suggests.
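            The two regimes described in the abstract (roughly O(N)
            visits at low cutoff probability, O(1) at high) can be
            explored with a toy simulation.  The binary branching
            and independent per-node cutoff are my assumptions for
            illustration, not the speaker's exact model.

```python
import random

def dfs_to_first_leaf(depth, p_cutoff, rng, branching=2):
    """Depth-first search that abandons an internal node with
    probability p_cutoff and succeeds on reaching the first leaf.
    Returns (nodes_visited, success)."""
    visited = 0
    def search(d):
        nonlocal visited
        visited += 1
        if d == depth:
            return True                  # first leaf reached: done
        if rng.random() < p_cutoff:
            return False                 # unpromising: cut off subtree
        # try children in order; stop as soon as one succeeds
        return any(search(d + 1) for _ in range(branching))
    ok = search(0)
    return visited, ok
```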

            H. S. Stone, IBM Yorktown Research
            Host:  B. D. Rathi

  Thurs., Dec. 6 Computer Science Seminar
  10:00 A.M.  APPLICATIONS OF COGNITIVE COMPLEXITY THEORY
  2C-012      TO THE DESIGN OF USER INTERFACES
            The cognitive complexity project has two major
            objectives.  The first is to gain a theoretical
            understanding of the knowledge and thought processes
            that underlie successful use of computer-based
            systems (e.g., text editors).  The second goal is to
            develop a design technology that minimizes the
            cognitive complexity of such systems as seen by the
            user.  Cognitive complexity is defined as the amount,
            content, and structure of the knowledge required to
            operate a system.  In this particular work, the
            knowledge is described as a production system.  The
            computer-based system is described as a generalized
            transition network.  Quantitative predictions,
            derived from the production system, are shown to
            account for various aspects of user performance
            (e.g., training time).  The talk will include a brief
            presentation of the design methodology based on the
            production system formalism.

            Prof. D. E. Kieras, University of Michigan, Ann Arbor
            Prof. P. G. Polson, University of Colorado, Boulder
            Host:  J. L. Bennett

------------------------------

Date: Wed 28 Nov 84 17:24:47-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - A Semantical Definition of Probability  (CSLI
         Stanford)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


            SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS

Speaker: Prof. Rolando Chuaqui, Catholic University of Chile and IMSSS
Title:   A Semantical Definition of Probability

Place:   Room 381-T, 1st floor Math. Corner
Time:    Monday, December 3, 4:15-5:30 p.m.

ABSTRACT:  The analysis proposed in this lecture is an attempt to formalize
both chance and degree of support.  Chance is considered as a dispositional
property of the objects plus the experimental conditions (i.e. what is
called the chance set-up).  Degree of support measures the support that the
evidence we have (i.e. what we accept as true) gives to propositions.
Chance, in this model, is determined by the set K of possible outcomes (or
results) of the chance set-up.  Each outcome is represented by a relational
structure of a certain kind.  This set of structures determines the algebra
of events, an algebra of subsets of K, and the probability measure through
invariance under a group of symmetries.  The propositions are represented
by the sentences of a formal language, and the probability of a sentence,
phi in K, P[K](phi), is the measure of the set of models of phi that are
in K.   P[K](phi) represents the degree of support of phi given K.  This
definition of probability can be applied to clarify the different methods
of statistical inference and decision theory.
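The definition P[K](phi) can be illustrated with a finite toy version,
taking K to be the outcomes of two coin tosses and the measure to be
uniform.  Both simplifications are mine; the lecture's construction uses
relational structures and measures invariant under a symmetry group.

```python
from itertools import product

# Toy chance set-up: K is the set of outcomes of two coin tosses,
# each outcome a pair of booleans (True = heads).
K = list(product([False, True], repeat=2))

def prob(phi):
    """P[K](phi): the (here uniform) measure of the set of
    outcomes in K that are models of the sentence phi."""
    return sum(1 for w in K if phi(w)) / len(K)

at_least_one_head = lambda w: w[0] or w[1]
both_heads = lambda w: w[0] and w[1]
```

Here prob(at_least_one_head) gives 0.75 and prob(both_heads) gives 0.25,
matching the intuitive degree of support the evidence K gives each sentence.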

------------------------------

Date: 27 November 1984 1607-EST
From: David Ackley@CMU-CS-A
Subject: Seminar - Learning in Stochastic Networks  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

     "Learning evaluation functions in stochastic parallel networks"
                           Thesis Proposal
             Tuesday, December 4, 1984, at 3:30pm in 5409 WeH.

Although effective techniques exist for adjusting linear coefficients of
features to produce an improved heuristic evaluation of a game position,
the creation of useful features remains poorly understood.  Recent work
on parallel learning with the Boltzmann Machine suggests that the
creation of useful new features and the tuning of coefficients of
existing features can be integrated into a single learning process, but
the perceptual learning paradigm that underlies the Boltzmann Machine
formalism is substantially different from the reinforcement learning
paradigm that underlies most game-learning research.  The thesis work
will involve the development of a reinforcement-based parallel learning
algorithm that operates on a computational architecture similar to the
Boltzmann Machine, and drives the creation and refinement of an
evaluation function given only win/lose/draw reinforcement information
while playing a small game such as tic-tac-toe.  The thesis work will
test several novel ideas, and will have implications for a number of
issues in machine learning and knowledge representation.

------------------------------

End of AIList Digest
********************

∂01-Dec-84  2350	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #165    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Dec 84  23:50:25 PST
Date: Fri 30 Nov 1984 21:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #165
To: AIList@SRI-AI


AIList Digest            Saturday, 1 Dec 1984     Volume 2 : Issue 165

Today's Topics:
  Planning - Bibliography Wanted,
  Cognition - Amnesia Before Age 5?,
  Administrivia - Number of Internet Users,
  News - AI in the News,
  Humor - Software Productivity,
  Seminars - User Interface Management System  (CMU) &
    Calculus of Partially-Ordered Type Structures  (MIT)
----------------------------------------------------------------------

Date: 30 Nov 84 15:07:23 PST (Fri)
From: Dan Shapiro <dan@aids-unix>
Subject: Planning Bibliography wanted

Does anyone know of an annotated biblography in the area of AI
planning?  My specific context is an autonomous land vehicle
project which involves generating a plan for traversing long
distances in essentially unrestricted terrain.  Issues in route
planning, real time planning, planning under uncertainty, planning
with multiple goals, goal conflict resolution strategies, etc.,
are all relevant.

I would also be interested in a reference list on the topic of
spatial reasoning, in particular the representation and
manipulation of symbolic features in maps or processed images.

I am going to be compiling/extending annotated bibliographies in
these areas; once done, I'd be glad to distribute them to anyone
who is interested.

                        Dan Shapiro
                        (dan@aids-unix)

------------------------------

Date: 29 Nov 84 15:22:05 EST (Thursday)
From: Chris Heiny <Heiny.henr@XEROX.ARPA>
Subject: Amnesia before age 5????

"..no one can remember events before the age of five."

What's going on here, anyway???   Does this mean that no one remembers
(during any part of their life) any events that occurred prior to age 5;
or does it mean that prior to age 5, one can't remember events occurring
during ages 0..4.99?  I personally can disprove the former: I remember
events that occurred when I was 3 & 4.  An acquaintance disproves both:
at age 3 she remembered an event several weeks after it occurred, and at
18 still remembers both the event and the remembering of the event (is
this a meta-memory?).

I think someone's confused....I hope it's not me.

                                        Chris

------------------------------

Date: Thu 29 Nov 84 14:10:35-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Amnesia before age 5????

Granted, I was rather sweeping in my generalization.  Kids certainly
do remember events, but after growing up very few can remember more
than one or two vague incidents from the early years.  Even those
few memories are often the ones strengthened by parents' retelling
of the events.  At any rate, >>I<< have only two or three conscious
memories from pre-kindergarten days, and not a great many more from all
of grade school.

                                        -- Ken Laws

------------------------------

Date: 26 Nov 84 15:45 EST
From: WILLUT%EDUCOM.BITNET@Berkeley
Subject: Estimate on number of Internet users

Some belated facts related to estimates of Internet users:

BITNET currently has 328 machines at 117 sites (almost exclusively
universities), with 52 sites pending.  A stats program run recently at
a non-peak time determined that 150 nodes were up and 6,000 users logged in.

Also, MAILNET includes 24 universities (most single machines, but some
multiple-node sites, such as Carnegie-Mellon) that exchange mail with the
MAILNET hub (the MIT-MULTICS machine) via dial-up and/or Telenet connections.

Using the proposed estimate of 100-200 users per university machine,
that's 35,000-70,000 users.

Candy Willut
EDUCOM Networking Activities

------------------------------

Date: Thu, 15 Nov 84 05:12:53 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: Recent AI News

[The following message from Laurence Leff at SMU was delayed somewhat
by mailer troubles.  He has offered to provide AIList readers with
references to AI articles in the non-AI press.  Such reviews and
alerts are certainly welcome.  -- KIL]


...
I currently provide this service to the AI group in the department and
it might be useful to others.  The journals I scan include

  Electronic Week
  Electronic News
  IEEE PAMI
  IEEE System Man and Cybernetics
  IEEE Computer
  Communications ACM
  Datamation
  Infoworld
  IEEE Spectrum
  IEEE Potentials

[Each notice] includes a citation (so people can find it) and usually
a sentence or two about contents.  Very short articles (<= 1 paragraph)
are usually typed in verbatim.



As if we didn't know department:

From Wall Street Journal

COMPUTERS THAT THINK like people create demand for experts in short supply.

Interest in "Artificial Intelligence" systems is booming, say employers
and recruiters among firms in financial services, computer hardware and
software design, defense and communications.  The systems principally
duplicate the thought processes of experts for trouble-shooting and cash
management.  Demand for systems is "explosive" says Halbrecht Associates,
Stamford, Conn.

But Halbrecht's recruiter Daryl Furno says "there just aren't enough
people to go around" to design the systems.  Most prospects have about five
job offers when they finish a project.  Christian & Timbers, a Cleveland
recruiting firm, says qualified experts demand 10%-20% premiums over most
computer designers.

DM Data, a Scottsdale, Ariz., consulting firm, estimates that there are
nearly 5,000 jobs in the industry now, but there may be 50,000 jobs
by 1990.


CACM 1984 - Vol 27 No. 10 page 1044:
Combination of PERT with [heuristic] search.


Byte Vol 9 No 11 October 1984 page 39:
Announcements of Tektronix AI system and TIMM expert system.


Byte Vol 9 No 11 October 1984 page 207:
Ad for IBM PC Common Lisp.


Electronic News Monday October 1, 1984 pp. 37:
Japanese-English translation-software article.


Copied from Computer Industry Update September 1984
IBM Company Announcements:

Announced a version of the Lisp programming language for the VM operating
system.  Lisp/VM is an integrated interactive environment that provides a
collection of artificial intelligence programming tools.  A structure
editor displays the structure of all objects including programs, data
and results.  A variety of debugging tools are included.  The price is
$6500.

The firm also unveiled five other internal research and development
projects in artificial intelligence: the YES/MVS, an expert system
which runs on mainframe computers that use the MVS operating system;
PRISM, a system shell written in PASCAL for developers who wish to
insert their own rules and inferences for expert systems; Scratchpad
II which incorporates a system and language to provide facilities for
scientists to manipulate algebra directly on the computer screen; PSC
Prolog, a version of the Prolog programming language that operates on
the 370 and interfaces with the LISP/VM and SQL/VM relational DBMS and
the CMS Command Executive language REXX; and HANDY, a user interface
to AI systems and a PC-based program that includes elements of
windowing, color animation, graphics, speech synthesis and video
programs.


Electronics Week November 12, 1984:
Describes efforts of Sperry ($20,000,000 worth) to become leader in AI.  pp. 34


Electronics Week October 22, 1984:
Work by Kurzweil on speech recognition techniques.
(Kurzweil was the developer of the text recognizer used to make a
reader for the blind.)  pp. 83


Infoworld November 5, 1984:
Review of "Into the Height [Heart? --KIL] of the Mind"
The review is oriented towards those not knowledgeable in AI.


IEEE Transactions on System Man and Cybernetics July/August 1984
    Volume SMC-14 Number 4:
Linguistic Representation of Default Values in Frames
  R. R. Yager pp 630
Approximate Reasoning as a Basis for Rule-Based Expert Systems
  R. R. Yager pp 636


Electronic News, Monday November 12, 1984:
Computer Thought ships ADA/ Interpreter Debugger on Symbolics 3600
  Machine pp 43


Electronic News, October 29, 1984:
Article on Marketing AI systems pp 34

------------------------------

Date: Thu 29 Nov 84 12:40:33-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Software Productivity

The November issue of IEEE Computer contains an Open Channel note from
David Feinberg about the folly of programmer productivity metrics that
reward only lines of code and not lines of documentation.  This
suggests many lines of thought.

A lines-of-code metric penalizes those who write APL one-liners -- a
good thing, no?  We could increase the readability/maintainability of
programs if we passed them through a filter that would expand complex
expressions into simpler steps.  We could then increase our
productivity even further by converting these simple steps into more
complex operations.  The possibilities for bootstrapping are obvious,
although important research questions must be solved to eliminate
cycles in repeated transformations.  Fortunately, we only need to find
one example of unlimited software growth (coupled with our supercomputer
technology) in order to guarantee our world pre-eminence in software
productivity.

The same concept can be extended to hardware development to guarantee
our lead in computer complexity.  Progress in this direction has so far
been limited to computer support systems (e.g., F-15 aircraft), but
wafer-scale integration offers hope for further optimization.

This looks like a fruitful area for artificial intelligence research.
(Progress might be measured by published lines of proof or by reams
of suggestive hypotheses.)  I suggest that DARPA institute a crash
project to develop a prototype optimizing preprocessor able to convert

    x = y = 0;

into

    register t;

    t = 0;
    y = t;
    x = t;
    if (x != y)
      abend("Compiler and/or hardware error.");


Further breakthroughs will come quickly.  For instance, we might
substitute

     Ln (Lim (1+(1/z))↑z) + sin↑2(x) + cos↑2(x)
        z->INF
                   INF
                 - SUM (cosh(y) sqrt(1-tanh↑2(y)) / 2↑N)
                   N=0

for the constant 0 in the above program, providing that we can find
numerical methods of evaluating the limit and infinite summation
with adequate accuracy.  All that we need for rapid progress is a
sufficiently complex bureaucracy to support research and manage
distribution of the results.

                                        -- Ken Laws

------------------------------

Date: 29 Nov 84  1404 PST
From: Frank Yellin <FY@SU-AI.ARPA>
Subject: from the New Yorker

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


The following is from the Palm Springs Desert Sun (and reprinted word
for word in the New Yorker).

        "Controlling a plant," says Theodore J.  Williams, a researcher
    at Purdue University "takes a wider attention span than any one person
    could possibly have."  But with a distributed computer system, Mr.
    Williams added, "You can increase profitability, increase
    productivity, reduce raw materials and reduce emissions, because the
    computer system is flexible, process, rather than an entire plant.
    The system is flexible, allowess, rather than an entire plant.  The
    system is flexible, allowing anather than an entire plant.  The system
    is flexible, allowing an operator to rearrange a manufacturing process
    from his seat at the console.  "If you change your mind," said Robert
    E. Otto, a technical consultant at the Monsanto Co., "you don't have
    to rewire, you can just reprogram."

        And because the systhe central computer.  Then if something goes
    wrong ing back to the central computer.  Then if something goes wrong
    ing back to the central computer.  Then if something goes wrong wit
    back to the central computer.  Then if something goes wrong with the
    main cocentral computer.  Then if something goes wrong with the main
    control l computer.  Then if something goes wrong with the main
    control room your plant is O.K."

------------------------------

Date: 28 November 1984 1433-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - User Interface Management System  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Phil Hayes
        Date:   December 3, 1984
        Time:   3:30 - 4:30 p.m.
        Place:  WeH 5409

        Title:  "Design Alternatives for User Interface Management
                 Systems Based on Experience with the COUSIN System"


 User   interface  management  systems  (UIMSs)  provide  user  interfaces  to
 application  systems  based  on  an  abstract  definition  of  the  interface
 required.    This approach can provide higher-quality interfaces with a lower
 construction cost.  This talk examines a UIMS called  COUSIN  which  provides
 graphical  interfaces  to  a variety of application systems running on a Perq
 under the Accent operating system.  The presentation will include a videotape
 of a COUSIN interface.

 The talk will also take a more general look at the design  space  for  UIMSs.
 Specifically, we will consider three design choices.  The choices concern the
 sharing  of  control  between  the  UIMS  and  the  applications  it provides
 interfaces to, the level of abstraction in the definition of the  information
 exchanged  between  user and application, and the level of abstraction in the
 sequencing of information exchange.  For each choice, we argue for a specific
 alternative.  COUSIN's design corresponds to the alternatives we  argued  for
 in two out of three cases, and partially satisfies the third.

------------------------------

Date: Mon 26 Nov 84 16:37:16-EST
From: Susan Hardy <SH%MIT-XX@MIT-XX.ARPA>
Subject: Seminar - Calculus of Partially-Ordered Type Structures (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

        A LATTICE-THEORETIC APPROACH TO COMPUTATION
BASED ON A CALCULUS OF PARTIALLY-ORDERED TYPE STRUCTURES

                     Hassan Ait-Kaci
  Microelectronics and Computer Technology Corporation
                      Austin, Texas


             DATE:    Friday, November 30, l984
             TIME:    2:00 p.m. - Talk
             PLACE:   NE43-512A

This talk will present a syntactic calculus of partially ordered
structures and its application to computation.  A syntax of record-
like terms and a "type subsumption" ordering are defined and shown
to form a lattice structure.  A simple "type-as-set"
interpretation of these term structures extends this lattice to
a distributive one, and in the case of finitary terms, to a
complete Brouwerian lattice.  As a result, a method for solving
systems of @i(type equations) by iterated substitution of type
symbols is proposed which defines an operational semantics
for KBL -- a Knowledge Base Language -- so-named to reflect
the original aim of this research; to wit, attempting a proper
formalization of the notion of "semantic network".

HOST:  Professor Rishiyur Nikhil

------------------------------

End of AIList Digest
********************

∂06-Dec-84  1139	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #166    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Dec 84  11:39:21 PST
Date: Fri 30 Nov 1984 22:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #166
To: AIList@SRI-AI


AIList Digest            Saturday, 1 Dec 1984     Volume 2 : Issue 166

Today's Topics:
  Administrivia - Remailing,
  Philosophy - Dialectics and Piaget,
  Logic Programming - Book Review,
  PhD Oral - Nonclausal Logic Programming,
  Seminar - Learning Theory and Natural Language  (MIT),
  Conference - Logics of Programs
----------------------------------------------------------------------

Date: Thu 6 Dec 84 09:20:51-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Lost Issue

It seems likely now that very few, if any, sites received this issue
on the first mailing.  I am therefore sending it out to all subscribers.
It has been gratifying to learn how many people just can't do without
an AIList issue, but you can all stop sending me messages about #166 now.

                                        -- Ken Laws

------------------------------

Date: 30 Nov 84 14:17:42 PST (Friday)
From: Rosenberg.PA@XEROX.ARPA
Subject: Dialectics and Piaget

Your summary of dialectics is quite nice, but your portrayal of Piaget
has a major error: Piaget was not a nativist, so it's unfair to lump him
together with, say, Kant.  (After all, Chomsky denounces him as an
empiricist!)  In fact, his constructivist genetic epistemology is
similar in many ways to the dialectical position you outlined (cf. his
books on negation and contradiction).

Jarrett Rosenberg

------------------------------

Date: 30 Nov 84 0059 EST (Friday)
From: Alex.Rudnicky@CMU-CS-A.ARPA
Subject: Piaget & dialectic

I would take issue with Bill Frawley's contention that Piaget's theory
is idealist in flavour.   If anything, it is essentially dialectical
in nature.  Piaget's work is often popularized in terms of his ``stages''
of intellectual development and their apparently immutable order.
His major contribution, however, is probably his elaboration of the
mechanisms by which this development could take place.  Specifically,
I would point to Piaget's concept of ``equilibration'', which can
(loosely) be described as the constant interaction between internal
cognitive structures and external events that results in modification
of internal structures.  Equilibrium is never quite reached, a state
that persists throughout an individual's life.  On the matter of
Piaget vs dialectics, I can offer the following quote:

"... in the domain of the sciences themselves structuralism has always
been linked with a constructivism from which the epithet "dialectical"
can hardly be withheld---the emphasis upon historical development,
opposition between contraries, and ``Aufhebungen'' (``de'passements'')
is surely just as characteristic of constructivism as of dialectic,
and that the idea of wholeness figures centrally in structuralist as
in dialectical modes of thought is obvious."  (Piaget, Structuralism,
1970, p.121).

------------------------------

Date: 30 Nov 84 09:46 PST
From: Newman.pasa@XEROX.ARPA
Subject: Re: Dialectics,   V2 #163

In reference to the recent posting on Dialectics, and in spite of the
fact that some of this has very little to do with AI.

Question: How does dialectics interact with the Heisenberg uncertainty
principle and other facets of quantum theory? It seems to me that the
idea of an interaction between the object and the observer which results
in some knowledge on the part of the observer might be an interesting
topic to discuss in terms of dialectics.

Comment: More in line with the basic topic of the digest, I think it is
obvious that there is some interaction between the observer and the
observed since psychology has shown that (to put it very simply) we see
and hear what we want to, and we don't notice what we wish to avoid.
However, this evidence and your arguments do not conclusively show that
Positivism is entirely wrong. Because I think that there are other
reasons to dismiss Behaviorism and I am not sure how Dialectics deals
with it, I will not deal with Behaviorism in this comment.

The best reason that I can think of on short notice for not dismissing
Positivism is that we must suppose that objects have some existence and
characteristics independent of the observer. I think that we would all
agree that there will be shock waves travelling through the air when the
tree falls in the forest, though we might disagree on whether this
constituted a sound (depending on the possible presence of an observer).
I am not sure what your position is on this issue, but my inclination is
that there is a position combining elements of Dialecticism and
Positivism which is more acceptable than either of its parents.

Note that this is just an opinion since I don't have the time or
resources to do justice to the topic at the moment.

>>Dave

------------------------------

Date: 30 Nov 1984 04:55-EST
From: ISAACSON@USC-ISI.ARPA
Subject: Dialectics: Perils, and Promises for AI

Bill  Frawley  has  written a thought-provoking introduction  for  a
discussion  on  dialectics  [AIList v2  #  163,  11/29/84].   As  he
mentioned,  he applies dialectics in his work on Soviet theories  of
language  and  cognition, and studies the use of Soviet theories  to
explain language learning and text processing.   My own work relates
to  a  new  mode of information processing which is  dialectical  in
nature.   One of its applications is in Dialectical Image Processing
(DIP), reported in AIList v2 #153, 11/12/84.  It goes without saying
that   I  think  that  things  dialectical  are  crucial  to  things
intelligent.   But  before I proceed to elaborate    this  point  of
view,  I wish to caution the uninitiated,  and point out some of the
many perils of dialectics.

                      The Perils of Dialectics

"Dialectics" is basically an elusive, vague, and often controversial
and  misunderstood  term.   Its  origin is in antiquity  (Plato  and
Aristotle).   It attained prominence  and immense influence  through
the  German  idealism  of  the  early  nineteenth  century  (Fichte,
Schelling,   and, most notably Hegel) and has been transformed later
into  "dialectical materialism" by no other than Karl  Marx.   Major
American  philosophers  (notably C.  S.  Peirce) have  been  greatly
influenced by Hegelianism, and  significant Hegelian influences have
reached  as far as Japan (Nishida).   All in all,  huge segments  of
humanity  today  live under political philosophies,  or  ideologies,
that are dialectical at their roots in one way or another.   Through
it all, though, dialectics has remained elusive, unformalizable, and
-- in the view of many,  especially in the West -- unscientific  and
hence irrelevant to Western science.  A weird mixture of a method, a
(non-standard)  logic,  a  philosophy,  and  sometimes  a  political
ideology,  it  usually  baffles  the  Western  mind  and  hopelessly
frustrates  attempts to harness it in the interest of scientific  or
technological  objectives.   In  fact,  if  you wish to  dispose  of
dialectics  altogether,  you  are urged to read a  most  devastating
critique  by Karl Popper ("What is dialectics?" - Chap.  14) in  his
*Conjectures and Refutations* book.   Writing many years ago,  when
Marxist ideology seemed even more menacing than it is today,  Popper
shows very little patience with "dialecticians" and portrays them
as  a bunch of misguided cynics,  intellectual dwarfs,  and  pseudo-
scientific misfits.   And,  I should add, his points are not without
merit in many instances, and should not be ignored.

In  addition,   beyond philosophical and scholarly controversy   and
confusion,  there  always  looms  the  ideological/political  stigma
which  is usually attached to dialectics.   For it is the case  that
"dialectical materialism" has become the official dogma of  Marxism-
Leninism.   Much of Soviet science is constrained by their political
ideology,  and,  almost Pavlovian-style,  researchers are sometimes
rewarded for exhibiting "dialectical thinking" in their work.   Yet,
few Soviet scientific discoveries are known,  or recognized,  in the
West that owe their existence to dialectical foundations.   In other
words,  even  a  totalitarian society that  promotes,  and  rewards,
dialectical  thinking among its intellectuals has failed to  produce
significant scientific or technological results which are  genuinely
dialectical.  So, the questions should be asked:  What's really good
about  that dialectical stuff?   What's the hidden promise,  if any?
Why drag it into AI, our good old American AI?

                 The Promises of Dialectics for AI

The  answers are not easy to state,  and surely are incomplete here.
Bill Frawley gave his own sketchy rationale for adopting  dialectics
for certain language learning theories.   I am generally in sympathy
with  his  reaching  out for dialectics,  but my reasons  for  using
dialectics in AI are  more basic and,  admittedly,  almost  bizarre.
Having  an engineering background,  I never dreamt of using anything
as  remote  as dialectics for anything as   technically  mundane  as
image  processing.   It  so happened that,  for something like  five
years  (in the mid 60's) certain simple types of operations  yielded
imagery  that was "interesting" but unexpected and not  particularly
meaningful  or  interpretable.   Only after the fact,  and after
outsiders had been consulted,  did it become clearer  (and  later
obvious!)  that  what that type of image processing  was  doing  was
Hegelian  dialectics,  pure and simple.   All in all,  that exercise
took  some  twenty  years.   In other  words,  we've  learned  about
dialectics from the machine,  rather than having had any premeditated
intention to program the machine to do dialectics!  Put another way,
the  machine  had been doing dialectics for us for some five  years,
well before we ever heard the term for the first time.  Well, twenty
years is certainly a long time,  and serious study of dialectics and
its ramifications has led, little-by-little, to the realization that
its  application in the implementation of certain intelligent  tasks
is  potentially  very  powerful.   The  reality  of  an  implemented
"dialectical  machine" then took hold and has opened  up  tremendous
possibilities.

To  put  it  all in very simple terms:  we on  this  project  don't
particularly  care  for Hegelian philosophy,  nor do  we care  about
Marxist  ideology.   Here  is  a machine that,  of its  own  accord,
behaves  in  a  classical dialectical  mode.   While  doing  so,  it
processes  images in an unusual (non-programmed) way that is  useful
in  machine-vision.   And  there are clear  indications  that  other
applications in other machine-intelligence domains are feasible, and
we hope to hear from others about those in this forum.   Anyway,  we
think  that  the promise of dialectics for AI clearly outweighs  its
traditional perils,   and recommend that people consider the  issues
and ramifications involved.

-- J. D. Isaacson

------------------------------

Date: Wed, 21 Nov 84 13:03:28 EST
From: Anonymous
Subject: Foundations of Logic Programming

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]


                   Foundations of Logic Programming

                          J.W. Lloyd

                  Springer-Verlag, ISBN 3-540-13299-6


This is the first book to give an account of the mathematical
foundations of Logic Programming.  Its purpose is to collect,
in a unified and comprehensive manner, the basic theoretical
results of Logic Programming, which have previously only been
available in widely scattered research papers.

The book is intended to be self-contained, the only prerequisites
being some familiarity with Prolog and knowledge of some basic
undergraduate mathematics.

As well as presenting the technical results, the book also
contains many illustrative examples and a list of problems
at the end of each chapter.  Many of the examples and problems
are part of the folklore of Logic Programming and are not easily
obtainable elsewhere.

                             CONTENTS

Chapter 1. DECLARATIVE SEMANTICS
           section 1.  Introduction
           section 2.  Logic programs
           section 3.  Models of logic programs
           section 4.  Answer substitutions
           section 5.  Fixpoints
           section 6.  Least Herbrand model
                   Problems for chapter 1

Chapter 2. PROCEDURAL SEMANTICS
           section 7.  Soundness of SLD-resolution
           section 8.  Completeness of SLD-resolution
           section 9.  Independence of the computation rule
           section 10. SLD-refutation procedures
           section 11. Cuts
                   Problems for chapter 2

Chapter 3. NEGATION
           section 12. Negative information
           section 13. Finite failure
           section 14. Programming with the completion
           section 15. Soundness of the negation as failure rule
           section 16. Completeness of the negation as failure rule
                   Problems for chapter 3

Chapter 4. PERPETUAL PROCESSES
           section 17. Complete Herbrand interpretations
           section 18. Properties of T'
           section 19. Semantics of perpetual processes
                   Problems for chapter 4
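The chapters above center on SLD-resolution (Chapter 2) and negation as
failure (Chapter 3).  As a rough propositional sketch of how those two
rules interact (my own illustration, not an example from the book; the
program and predicate names are invented), consider this tiny Python
interpreter:

```python
# A minimal propositional sketch of SLD-resolution with negation as
# failure.  A program maps each atom to a list of clause bodies; a
# body is a list of literals, where ("not", a) marks a negative
# literal to be handled by negation as failure.
program = {
    "light_on": [["power", ("not", "switch_broken")]],
    "power":    [[]],   # a fact: one clause with an empty body
    # "switch_broken" has no clauses, so proving it finitely fails
}

def solve(goals):
    """Try to prove the goal list using leftmost literal selection."""
    if not goals:
        return True                      # empty goal list: success
    goal, rest = goals[0], goals[1:]
    if isinstance(goal, tuple) and goal[0] == "not":
        # negation as failure: "not a" succeeds iff proving a fails
        return solve([goal[1]]) is False and solve(rest)
    for body in program.get(goal, []):   # resolve against each clause
        if solve(body + rest):
            return True
    return False                         # all clauses failed

print(solve(["light_on"]))        # True
print(solve(["switch_broken"]))   # False (finite failure)
```

In the propositional case the soundness and completeness questions are
trivial; the substance of the theory the book collects lies in the
first-order case, where answer substitutions come into play.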

------------------------------

Date: 29 Nov 84  0255 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: PhD Oral - Nonclausal Logic Programming

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Monday 3 December, 1984, 2:15pm, 146 MJH
PhD Orals
Yoni Malachi


                       Nonclausal Logic Programming


The Tableau Programming Language (Tablog) is based on the Manna-Waldinger
deductive-tableau proof system and combines advantages of Prolog and Lisp.  A
program in Tablog is a list of formulas in [quantifier-free] first-order logic
with equality and is usually more natural than the corresponding program in
either Lisp or Prolog.

The inclusion of equivalence, negation, conditionals, functions, and equality
in Tablog enables the programmer to combine functional and relational
programming in the same framework.  Unification is used as the binding
mechanism and makes it convenient to pass unbound variables to a program and
to manipulate partially computed objects.

The tableau proof system is employed as an interpreter for the language in the
same way that a resolution proof system serves as an interpreter for Prolog.
The basic rules of inference used in the system are: nonclausal resolution,
equational rewriting, and replacement of formulas by equivalent ones.

This work describes Tablog and its semantics.  In addition to the simple
declarative (logical) semantics of the language, a procedural interpretation
is presented for sequential and parallel models of computation.  Various
properties of the language are studied and the language is compared to Lisp
and Prolog and to other combinations of functional and logic programming.
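The abstract's point about unification as the binding mechanism, and
about passing unbound variables to obtain partially computed objects,
can be sketched in a few lines.  This is my own toy illustration, not
Tablog code: the term representation (strings beginning with "?" for
variables, tuples for compound terms) is invented, and the occurs
check is omitted for brevity.

```python
# Toy first-order unification: variables are strings starting with "?",
# compound terms are tuples whose first element is the functor name.
# The occurs check is omitted for brevity.

def walk(t, s):
    """Follow variable bindings of term t in substitution s."""
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(x, y, s=None):
    """Return a substitution (dict) unifying x and y, or None."""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if isinstance(x, str) and x.startswith("?"):
        return {**s, x: y}               # bind variable x to y
    if isinstance(y, str) and y.startswith("?"):
        return {**s, y: x}               # bind variable y to x
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):           # unify argument by argument
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                          # clash: not unifiable

# Passing unbound variables yields a "partially computed object":
s = unify(("pair", "?head", "?tail"), ("pair", 1, ("pair", 2, "nil")))
print(s["?head"], s["?tail"])   # 1 ('pair', 2, 'nil')
```

The same mechanism serves both directions: a variable bound to a
partial structure can be passed on and refined by later unifications,
which is what makes it usable as the binding mechanism of a language.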

------------------------------

Date: 29 Nov 1984  14:50 EST (Thu)
From: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Learning Theory and Natural Language  (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 Language and Learning Seminar Series


                           Scott Weinstein

                      University of Pennsylvania
                                 and
                  Center for Cognitive Science, MIT


               ``LEARNING THEORY AND NATURAL LANGUAGE''


                      Tuesday, December 4, 2 PM
                            A.I. Playroom
                   8th floor, 545 Technology Square


Formal learning theory may be conceived as a means of relating
theories of comparative grammar to studies of linguistic development.
After a brief review of relevant concepts, the present talk surveys
formal results within Learning Theory that suggest corresponding
constraints on linguistic theory.  Particular attention is devoted to
the question: How many possible natural languages are there?

Host: Prof. Robert C. Berwick


Refreshments at 1:30

------------------------------

Date: 25 Nov 84 1146 EST (Sunday)
From: Edmund.Clarke@CMU-CS-A.ARPA
Subject: Logics of Programs Call for Papers

                      CALL FOR PAPERS
                   Logics of Programs 1985

The Workshop on Logics of Programs 1985, sponsored by Brooklyn College
and IBM Corporation, will be held Monday, June 17 through Wednesday,
June 19, at Brooklyn College in Brooklyn, New York.  Papers presenting
original research on logic of programs, program semantics, and program
verification are being sought.

Typical, but not exclusive, topics of interest include:  syntactic and
semantic description of new formal systems relevant to computation,
proof theory, comparative studies of expressive power, programming
language semantics, specification languages, type theory, model theory,
complexity of decision procedures, techniques for probabilistic,
concurrent, or hardware verification.  Demonstrations of working systems
are especially invited.

Authors are requested to submit 9 copies of a detailed abstract (not a
full paper) to the program chairman:

          Professor Rohit Parikh
          Logics of Programs '85
          Department of Computer and Information Science
          Brooklyn College
          Brooklyn, New York  11210

Abstracts should be 6 to 10 pages double-spaced, and must be received no
later than January 14, 1985.  Authors will be notified of acceptance or
rejection by February 18, 1985.  A copy of each accepted paper, typed on
special forms for inclusion in the proceedings, will be due on March 24, 1985.

------------------------------

End of AIList Digest
********************

∂02-Dec-84  1843	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #167    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Dec 84  18:41:13 PST
Date: Sun  2 Dec 1984 15:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #167
To: AIList@SRI-AI


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 167

Today's Topics:
  Administrivia - Special Net.AI Issues for Arpanet Readers,
  Linguistics - Language Deficiencies & Translation Difficulties
----------------------------------------------------------------------

Date: Sun 2 Dec 84 16:04:11-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Special Net.AI Issues for Arpanet Readers

Laurence Leff of SMU has sent me the Usenet Net.AI record for
the period since my Usenet gateway has been down.  (I.e., since
October 23.)  I will pass the Usenet messages along to Arpanet
readers in three special issues.  This first one includes a
discussion of linguistics and translation difficulties.  The
next issue will include related material about the influence
of language on thought.  The third will be a miscellany issue
containing nonlinguistic material.

					-- Ken Laws

------------------------------

Date: 1:18 pm  Oct 23, 1984
From: colonel@gloria
Subject: natural language deficiencies?

> This struck a chord.  I remember a PBS TV show about the Australian
> aborigines and the difficulties studying them.  There is apparently no
> way to phrase "what if" types of questions.  The anthropologists had to
> tell them a thing was so, get their response, and then tell them it was
> not so.
> 
> This would seem to me to be a serious "expressive deficit".  Any
> aborigines on the net care to verify this?

	A general semanticist named Harrington whose first name
	I have forgotten said that he knew an Indian who was
	fluent in his tribal language and also in ours.  Harring-
	ton asked the Indian if there were such words (meanings)
	as "could" and "should" in his Indian language.  The
	Indian was quiet for a while, then shook his head.  "No,"
	he said.  "Things just are."

			Barry Stevens, ←Don't Push the River← (1970)

Expressive deficiency?  Or a more accurate modeling of reality?

See also the "Counterfactuals" dialogue in Hofstadter's ←Godel, Escher,
Bach.←

Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 10:10 am  Oct 25, 1984  
From: dan@aplvax
Subject: Tenses in Hopi

It is well-known that the Hopi (American Indian) language only has a
present tense, there are no past or future tenses for their verbs.
Surely this is a language deficiency.

------------------------------

Date: 2:27 pm  Oct 26, 1984  
From: mmt@dciem
Subject: Tenses in Hopi

  It is well-known that the Hopi (American Indian) language [...]

If I remember correctly, Whorf pointed out that the Hopi don't really
have verbs.  Rather, they differentiate between events that last longer
than a cloud (nouns) and shorter events (verbs). Presumably they also
distinguish between events you know about (past+present[which is now past
because you are talking about it]) and events you don't know about
(counterfactuals and/or future).  Does anyone know more directly about
this?
The nature of the Hopi verb/noun tense/factual distinction is interesting
because Whorf used the non-distinction between noun and verb to
argue that the Hopi probably see the world in a different way.

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: 1:06 pm  Oct 27, 1984  
From: steven@mcvax
Subject: Language Deficiencies

I find this talk of 'deficiencies' a little disturbing.

A deficiency is in the ear of the listener, surely. If a language doesn't
have a particular feature, then that is only because the speakers of that
language don't need it. If they perceived a need for it, something would
develop.

As an example, 'standard' English doesn't distinguish between 'you' singular
and plural, while many languages do. Is this a deficiency of English? Most
English speakers would probably say not because they get along fine as it is.
However certain dialects of English apparently found it a deficiency, because
they went and invented a plural version (y'all in USA, youse in England).

A similar example is the difficulty in English of saying something in a
gender-neutral way (Chinese has a single word for 'he or she' for instance).
Many English speakers find this a deficiency, and so are developing ways to
express these things.

------------------------------

Date: 7:32 am  Oct 28, 1984  
From: malcolm@west44
Subject: "Youse"

Since when has the word "youse" been used in England (or even Great Britain)?

------------------------------

Date: 7:40 am  Oct 28, 1984  
From: dick@tjalk
Subject: Language Deficiencies

>
>	From: dan@aplvax.UUCP (Daniel M. Sunday)
>
>	It is well-known that the Hopi (American Indian) language [...]

It is well-known that the English language only has a genderless
substantive, there are no masculine or feminine forms for their
substantives.  Surely this is a language deficiency.

It is well-known that the English language only has a sizeless
substantive, there is no diminutive form for their substantives.
Surely this is a language deficiency.

There is no (reasonable) way to render Dutch: leraresje (little female teacher)
into English.

					Dick Grune
					Vrije Universiteit
					Amsterdam
and my name isn't Richard.

------------------------------

Date: 7:40 am  Oct 28, 1984  
From: steven@mcvax
Subject: Translation of Dutch

> There is no (reasonable) way to render Dutch: leraresje (little female
> teacher) into English.

There is no way, reasonable or not, to render Dutch 'gezellig' into English.
This is also SURELY a language deficiency.

(Since there's no way to render the word into English, I'm afraid I can't
explain to non-Dutch speakers what it means, except to say that it's an
adjective describing social situations, and is desirable.)

((For Dutch readers: I find the same problems with 'eng', though it's not so
widely discussed as gezellig. But perhaps discussion on that should be
restricted to nlnet distribution.))

------------------------------

Date: 6:32 pm  Oct 29, 1984  
From: rob@ptsfa
Subject: Language Deficiencies

> It is well-known that the Hopi (American Indian) language only has a
> present tense, there are no past or future tenses for their verbs.
> Surely this is a language deficiency.

Similarly Indonesian does not have tenses either (nor aspect or person
or number).
However, the meanings that tenses, etc. express in English et al. get expressed
with separate words in Indonesian. In fact English doesn't even have a real
future tense, e.g. no prefix/suffix added to verb root to denote future;
English uses a separate word 'will' to denote futurity, as well as phrases
like 'be going to'.
Indonesian has a whole battery of adverbs to take the place of verb tense.

The lack of a syntactic feature does not necessarily mean a communicative
deficiency. And in any case it is not clear that if a language cannot
communicate some certain meaning it is deficient - maybe the native speakers
of that language have no need to express that meaning.
Do Congolese Pygmies need to have a word for snow? Actually that's a slightly
different issue than tense, because 'snow' is an object whereas tense
has a more abstract significance.

Rob Bernardo, Pacific Bell, San Francisco, California
{ihnp4,ucbvax,cbosgd,decwrl,amd70,fortune,zehntel}!dual!ptsfa!pbauae!rob

------------------------------

Date: 7:24 pm  Oct 29, 1984  
From: lwall@sdcrdcf
Subject: Language Deficiencies

In article <6115@mcvax.UUCP> steven@mcvax.UUCP (Steven Pemberton) writes:

>I find this talk of 'deficiencies' a little disturbing.
>
>A deficiency is in the ear of the listener, surely. If a language doesn't
>have a particular feature, then that is only because the speakers of that
>language don't need it. If they perceived a need for it, something would
>develop.

I find this talk of deficiencies a little disturbing too, but for different
reasons.  Almost all purported "deficiencies" indicate not that a language
cannot communicate a particular idea, but that the purported linguist has
not studied the language well enough.  Languages are not differentiated on
the basis of what is possible or impossible to say, but on the basis of what
is easier or harder to say.  That is not to say that a given language is
easier or harder than another--languages on the whole are of approximately
equal complexity, but the complexities show up in different places in
different languages.  This is known as the waterbed theory of linguistics--
you push it down one place and it pops up somewhere else.

>As an example, 'standard' English doesn't distinguish between 'you' singular
>and plural, while many languages do. Is this a deficiency of English? Most
>English speakers would probably say not because they get along fine as it is.
>However certain dialects of English apparently found it a deficiency, because
>they went and invented a plural version (y'all in USA, youse in England).

Here in California, it's "you guys".  And no, they don't all have to be male.
They don't any of them have to be male.

Of course, "standard" English has "all of you", "you folks", "you ladies",
etc., and a bunch of vocative phrases to indicate plurality.  "Gentlemen,
start your engines!"

>A similar example is the difficulty in English of saying something in a
>gender-neutral way (Chinese has a single word for 'he or she' for instance).
>Many English speakers find this a deficiency, and so are developing ways to
>express these things.

One does have a certain amount of difficulty, doesn't one?  But just because
an English speaker runs up against this problem, it doesn't mean they have to
reinvent the wheel, do they?  English already has both a formal and an
informal way to express the idea.  One doesn't have to be misunderstood if they
don't want to.  Of course, if one mixes up the formal with the informal, they
very well might be misunderstood.

(For you clunches out there, the previous paragraph is self-referential.)

Larry Wall
{allegra,burdvax,cbosgd,hplabs,ihnp4,sdcsvax}!sdcrdcf!lwall

------------------------------

Date: 4:55 pm  Oct 30, 1984  
From: mmt@dciem
Subject: Translation of Dutch

> There is no way, reasonable or not, to render Dutch 'gezellig' into English.
> This is also SURELY a language deficiency.

  (Since there's no way to render the word into English, I'm afraid I can't
  explain to non-Dutch speakers what it means, except to say that it's an
  adjective describing social situations, and is desirable.)

Why is there *no* way?  Do you mean to imply that English-speakers cannot
experience this social situation, or just that it would take a complex
phrase or paragraph to get the idea across.  If the former, then there
must be more difference between the Dutch culture and all English-speaking
ones than I have observed.  If the latter, then why not try and see
where you get.  I was under the impression that "gezellig" was close
to cosy, comfortable, unconstrained and home-like.  Is this anything like?

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: 7:47 am  Oct 31, 1984  
From: marcus@pyuxt
Subject: Translation of Dutch

Does gezellig mean the same as the german word "gemutlich"? ('Skuse the
spelling, please, but I'm not a German speaker, or even a speaker of
German).
		marcus hand

Incidentally,  I think it's usually a deficiency in the speaker or writer
rather than the language....

------------------------------

Date: 7:33 pm  Nov  2, 1984  
From: lambert@mcvax
Subject: Language Deficiencies

[warn your system administrator if this line is missing]

> I think that there are two issues mixed up at the moment, being
> 1. Some languages have a single word-construction for an idea
>    that needs several words in some other language.
> 2. Some languages *CAN NOT* be used to express certain ideas.

The distinction between these two categories is not an absolute one. Steven
Pemberton mentioned already the Dutch word "gezelligheid".  No doubt it is
possible to explain the meaning of the word "gezellig" and its derivatives in
English.  To do so, however, to a reasonable degree of precision (let alone
to a degree of precision that would suffice for non-native speakers to rely
on their understanding and utter these words when and only when appropriate)
would require a minor essay.  Now these words are not at all infrequently
used in Dutch.  My dictionary lists as translations for "gezellig":
"sociable", "cosy", "snug" and "social".  A "gezellig avondje" is rendered as
a "social evening".  In the direction English -> Dutch this is always
reasonable.  But telling the host that the evening was "gezellig" would be
considered a compliment, whereas stating that it was social sounds like a
superfluous statement of fact.  Translating "gezellig" as "cosy" is usually
not only wrong, but also ridiculous.  When I try to express myself in English
where I would have used "gezellig" in Dutch, I usually substitute "nice".
However, "nice" does not really convey the meaning of what I am trying to
say.  I experience this as a language deficiency.

Another example is the Dutch phrase "voor de hand liggen".  There is no
phrase in English with the same meaning.  In some cases, "to be obvious" is
acceptable, in some other cases one can use "to come to mind", but in many
cases both are plainly wrong, and in those cases there is no *reasonable* way
that I know of to express the concept in English.

> On the other hand, the Aborigines have no construction for 'what if',
> which is much more serious. This really is a language deficiency,
> since it will take *lots* of trouble to communicate this idea.

Having no construction for a concept is not a property of a race or ethnic
group, but of a language.  There are many Australic languages.  Is the lack
of expressibility of "what if" common to all these, mutually largely
disparate, languages?  That would be a very interesting fact to find.
(However, it appears that none of these languages can express the concept
"supply-side economics" :-) Seriously, I don't know any of the Australic
languages, but I am not at all convinced that natural languages do exist in
which it is hard to express the fact that something has the status of a
hypothesis, even though the language may lack a word for the concept
"hypothesis".  This claim about the languages spoken by the Aborigines seems
to me just one more unfounded popular belief similar to so many introduced by
travellers to uncharted areas while recounting their curious discoveries.  If
it is true, however, for some language, then this would be a good test case
for the Sapir-Whorf hypothesis.  For the implication would be that the native
speakers could not entertain hypothetical thoughts, and so would not make
provisions for contingencies.

To conclude, I want to point out two deficiencies common to all languages I
know.  The first is well known: what should you reply to the question "Do you
still persist in your lies?", when you believe you are speaking the truth?

There is no way of stating that the question implies a falsehood other than
by directly contradicting the falsehood.  On paper, "Question not applicable"
may do, but not in a conversation.  The other deficiency has to do with "why"
questions.  Children tend to pass through a period of asking questions like:
"Why are bananas yellow?" "Why does water not burn?"  "Why is ice cold?"
etc., ad nauseam.  In some cases there is no "why"; the concept does not
apply.  For example, it is not reasonable to ask "Why is it Wednesday
today?", or "Why is red a colour?".  The deficiency is that there is no
accepted way of stating about a proposition that the concept "why" does not
apply.

     Lambert Meertens
     ...!{seismo,philabs,decvax}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

"If I were you, I should wish I were me."

------------------------------

Date: 7:34 pm  Nov  2, 1984  
From: steven@mcvax
Subject: "Youse"

In article <382@west44.UUCP> malcolm@west44.UUCP (Malcolm Shute.) asks:

> Since when has the word "youse" been used in England (or even Great Britain)?

Well, the earliest date I can't give you. However, it was recorded in
Norfolk, for instance, in 1905. As for Great Britain, I can find references to
1880, and possibly earlier, in Northern Ireland. However, since it is also
recorded in Australia and the USA, it probably derives from much earlier.

------------------------------

Date: 2:43 am  Nov  4, 1984  
From: biep@klipper
Subject: Translation of Dutch

In article <1175@dciem.UUCP> mmt@dciem.UUCP (Martin Taylor) writes:

>I was under the impression that "gezellig" was close
>to cosy, comfortable, unconstrained and home-like.  Is this anything like?

	I wouldn't say it is "close to" the words you mentioned.
	It often is, but it isn't that. E.g. it can suddenly be
	"gezellig" when one of two people on an uninhabited
	island suddenly reveals a bar of chocolate and shares it
	with his companion. They may be almost starving, but
	they eat it with little bits, and talk about the taste,
	and where, in which shop ("You remember, the old man
	who used to buy licorice over there?"), one can buy
	the best, etc.
	My English isn't that good, but the whole situation
	doesn't sound like "cosy", or "home-like", or such. The
	Dutch word "gezellig" is derived from the same stem as
	"gezelschap", which means both "the group around you"
	and "the mutual affection within the group". However,
	it has got a special meaning too because of the fact
	that the word is often used with respect to going and
	drinking coffee together at eleven o'clock in the mor-
	ning. (The word "coffee" itself is highly associated
	with "gezellig" too: I don't drink coffee, but nobody
	would invite me "Come, and drink chocolate milk with
	us!", however that is what I actually do. The word
	"coffee" *has* to be mentioned to communicate the
	idea. The Dutch expression for "Our house stands always
	open for you" is "The coffee is always ready for you".)

							  Biep.
	{seismo|decvax|philabs}!mcvax!vu44!botter!klipper!biep

I utterly disagree with everything you are saying, but I am
prepared to fight myself to death for your right to say it.
							--Voltaire

------------------------------

Date: 5:10 pm  Nov  4, 1984  
From: ir44@sdcc6
Subject: Language Deficiencies

> 
> > I think that there are two issues mixed up at the moment, being
> > 1. Some languages have a single word-construction for an idea
> >    that needs several words in some other language.
> > 2. Some languages *CAN NOT* be used to express certain ideas.
> 
> The distinction between these two categories is not an absolute one. 

There are further problems in the comparison of languages and
their semantic capabilities that become evident in this series
of articles on "deficiencies." 
   1. The discussion of Dutch "gezellig" illustrates the
   difficulty of defining a word (more for some words than
   others) in its OWN language, let alone translating it, i.e.,
   finding a single or compact phrase that conveys its meaning
   to speakers of another language. The problems of definition
   and translation appear to be similar and always approximate.
   One test (of distribution) is whether a proposed synonym or
   defining phrase or circumlocution can be substituted for the
   original word over the whole range of environments in which
   that word can occur. Under this test there are few true 
   synonyms within a language let alone single word translations
   in the target language. In translation the test is doubly
   approximate as the environments in which a term occurs are
   themselves approximate translations, themselves environed by
   the word being tested. I have spoken to Bible translators, now
   so widespread in the world, about how they translate such
   notions as "God" or "hell." They do their best, ignore the
   incommensurabilities, and rely on God or "God" to get his
   point across.

   2. The notion of "word" in my inexpert opinion is one of the
   most loosely defined in linguistics. Sometimes it is taken
   as a unit that can occur by itself (unlike an affix which,
   while it can occur with many different roots,
   is a bound morpheme that would not occur by itself unless 
   it has been liberated, like "isms and ologies.") But much of
   what we take as words in English are, I think, only separated
   as orthographic conventions, not occurring separately as 
   utterances in speech-- compare "am" with "-ing". The sense
   of "wordness" may be more semantic than syntactic or perhaps
   more a matter of cognitive chunking. The question of what 
   makes a good dictionary entry may have its counterpart in the
   storage of vocabulary -- "word" being in some way the best
   retrieval unit. 

   Ted Schwartz    Anthro/UCSD

------------------------------

End of AIList Digest
********************

∂02-Dec-84  2016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #168    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Dec 84  20:15:59 PST
Date: Sun  2 Dec 1984 16:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #168
To: AIList@SRI-AI


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 168

Today's Topics:
  Perception - Language and Thought
----------------------------------------------------------------------

Date: 4:55 pm  Nov 10, 1984  
From: dts@gitpyr
Subject: Language and Thought

...

> The lack of a syntactic feature does not necessarily mean a communicative
> deficiency. And in any case it is not clear that if a language cannot
> communicate some certain meaning it is deficient - maybe the native speakers
> of that language have no need to express that meaning.

I don't take it as given that there exist any concepts that some language
can't express, because I'm not sure what it means to say that a language
"can't express" an idea. One thing that most people in this discussion
seem to have overlooked is the fact that the words don't carry all the
meaning.

The words you are reading now are arousing ideas in your mind. I have no
direct control over those ideas. All I can do is try to choose my words
so that they will evoke the ideas I want them to in the minds of the
majority of those people who bother to read this. If you fail to properly
understand what I am trying to say, whose fault is it? Mine for choosing
the wrong words? Yours for having the wrong ideas? English's for not
having a single word which encompasses everything I'm trying to say?

I've had discussions on this topic before with friends, in which I took
the position that there are things that can't be expressed in English.
But now I think that's a naive viewpoint because so much depends on
mutual understanding between the persons involved. I asked a Dutch
person about "gezellig" and she explained it so that I think I
understand. The closest single-word synonym I could think of in English
is "homey" but that's not really anywhere near being an exact equivalent.

But now, if someone said to me, "Homey. You know, in the Dutch sense,"
I would have a good idea of what they meant. English will have
communicated an idea that many people on the net have been saying it
can't.

-- Either Argle-Bargle IV or someone else. --

Danny Sharpe
School of ICS
Georgia Institute of Technology, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!dts

------------------------------

Date: 1:00 pm  Nov  9, 1984
From: arndt@decwrl
Subject: The Soapy-Woof theory of talk.


It seems to me that there is a hole at the bottom of the bag.
I mean, does language really have THAT much control over how we think?

"Language exists to communicate whatever it can communicate.  Some things
it communicates so badly that we never attempt to communicate them by words
if any other medium is available."

". . . what language can hardly do at all, and never does well, is to inform
us about complex physical shapes and movements.  Hence descriptions of such
things in the ancient writers are nearly always unintelligible.  Hence in
real life we never voluntarily use language for this purpose; we draw a
diagram or go through pantomimic gestures."

"Another grave limitation of language is that it cannot, like music or
gesture, do more than one thing at once.  However the words in a great poet's
phrase interinanimate one another and strike the mind as a quasi-instantaneous
chord, yet, strictly speaking, each word must be read or heard before the next.
That way, language is unilinear as time.  Hence, in narrative, the great 
difficulty of presenting a very complicated change which happens suddenly.
If we do justice to the complexity, the time the reader must take over the 
passage will destroy the feeling of suddenness.  If we get in the suddenness
we shall not be able to get in the complexity.  I am not saying that a genius
will not find its own ways of palliating this defect in the instrument; only
that the instrument is in this way defective."

"One of the most important and effective uses of language is the emotional.
It is also, of course, wholly legitimate.  We do not talk only in order to
reason or to inform.  We have to make love, and quarrel, to propitiate
and pardon, to rebuke, console, intercede, and arouse.  The real objection
lies not against the language of emotions as such, but against language 
which, being in reality emotional, masquerades - whether by plain hypocrisy or
subtler self-deceit - as being something else."

From:  C.S. Lewis, STUDIES IN WORDS, Cambridge University Press, 1960.
       Chapter 9, "At The Fringe Of Language," pp. 214-215.

Comments???????????????????

Regards,

Ken Arndt

------------------------------

Date: 7:21 am  Nov 12, 1984  
From: robison@eosp1
Subject: Perception

I disagree strongly with the C.S. Lewis quote below (from Ken Arndt).

>"Another grave limitation of language is that it cannot, like music or
>gesture, do more than one thing at once.  However the words in a great poet's
>phrase interinanimate one another and strike the mind as a quasi-instantaneous
>chord, yet, strictly speaking, each word must be read or heard before the
>next. That way, language is unilinear as time. Hence, in narrative, the great 
>difficulty of presenting a very complicated change which happens suddenly.
>If we do justice to the complexity, the time the reader must take over the 
>passage will destroy the feeling of suddenness.  If we get in the suddenness
>we shall not be able to get in the complexity.  I am not saying that a genius
>will not find its own ways of palliating this defect in the instrument; only
>that the instrument is in this way defective."
>
>From:  C.S. Lewis, STUDIES IN WORDS, Cambridge University Press, 1960.
>       Chapter 9, "At The Fringe Of Language," pp. 214-215.

All arts that appeal primarily to one sense suffer to a degree from
the fault Lewis describes, that one item of information is processed at
a time, and the artwork is perceived serially in a sense.  Almost all
great artists in all media have wonderful ways of addressing this
problem, so that it is not a limitation, but merely a challenge.
In the specific example, the words of poems particularly tend to have
multiple meanings, and to give additional meanings to other parts of
the poem.  Even if one focuses on the INITIAL reading of a poem
(which is ridiculous), the words already read will continually change
in perception as additional words are read.  This is a heavy parallel
activity!

Other examples one might give:

  In writing, many authors contrive to describe a complicated sudden
  change obscurely, so that the reader knows he does not understand the
  words fully in his serial reading, but the entire complex moment may
  be understood suddenly when, after many pages, the whole situation
  falls into place.  I'm sure we can all think of books where this
  occurs.  For spectacular, but easy examples of this I would recommend
  the beginning (say, the first 15 pages) of either of these novels by
  Henry Green:
	- Living
	- Party Going
  In each case, he starts by partially describing the current situation
  in such an uncommunicative manner that the reader is all at sea.
  Conversation, observation, and environment just accumulate in the
  reader's mind, awaiting elucidation.  Then orientation occurs, the
  meaning of the opening pages hits the reader in a rush, and he is
  emotionally deep in the fabric of the book, having been struck by
  a torrent of words suddenly, in a way C.S. Lewis would have thought
  impossible...

  Painters and similar artists know that the eye perceives a picture
  serially.  Most types of art attract the eye (not 100%, but
  materially) to a part of the picture, and then lead it from place to
  place.  Many pictures are arranged so that the actual motion of the
  eye will be soothing or otherwise.  Some pictures are arranged so
  that a surprise awaits the eye after part of the picture is
  perceived.  [In Western Art, landscapes that slope down from left
  to right tend to be more soothing than the reverse, since Western
  eyes tend to read from left to right.  Some pictures just lead the
  eye round and round through an unsettling maze, as Picasso's
  Guernica.]

  Musical compositions are heard serially.  Again, if we focus on the
  initial hearing, musical ideas are being presented serially, with
  a minimum of parallelism possible.  But as a composition goes on,
  the listener learns more about, and re-interprets, what he has heard.
  An obvious example would be a theme and variations, in which some of
  the variations emphasize constructional characteristics of the theme,
  and some recall the theme so the listener can rethink its impression
  on the basis of better understanding of its parts.  These variations
  will be communicating in parallel (what happened before, plus the new
  variation itself).

  Three-dimensional sculptures must also be perceived over time, since
  they are not fully visible from one place.  Many sculptors are aware
  of this and arrange that the whole is greater than the sum of its
  parts.

  Etcetera, etcetera, etcetera.

------------------------------

Date: 10:26 am  Nov 14, 1984  
From: ben@sysvis
Subject: Perception

	Interesting.  (But why is this in net.ai instead of net.lang.n?)
	Language as an informational tool, especially when in written form, 
	seems to have some distinct disadvantages in terms of information
	density.  When describing a house, for instance, it is certainly 
	more informative to draw a floor plan, with dimensions, and provide
	architectural renderings in color, than to give a verbal description.

	However, the emotional impact of being present in a building itself
	cannot be conveyed by graphic or pictorial means alone.  If you visit
	the Vietnam War Memorial in Washington, it is a moving experience.
	However, the photograph you bring back cannot convey the emotion you
	experienced.  It will arouse emotional reactions in your viewers, but
	not necessarily the emotions you wished to convey.
	
	To a limited extent, written language together with graphic and
	pictorial information will provide the emotional base for communication.
	Spoken language, with all its intonational coloring, will convey much
	more of the emotion.  These, combined with a musical score, will allow
	you as a communicator to most closely recreate the experience both 
	informationally and emotionally for your audience.  Thus the basis for
	this combination in cinema and video.

					Ben Evans
					{ctvax!convex}!trsvax!sysvis!ben

------------------------------

Date: 5:02 pm  Nov 17, 1984  
From: mark@digi-g
Subject: Language and Thought


arndt@lymph.DEC writes:

> ...  does language really have THAT much control over how we think?

That depends on what you mean by `think'.

This is one of my pet theories.

At the very least, there are functional areas of the mind that perform
verbal reasoning.  This area maintains the continuous internal dialogue
that we all experience.  Most people identify this area as `I'.  There
are certainly non-verbal areas, too.  But this is not identified as the
self.  Consider, as an example, reflex actions: `I jumped out of the way
before I was even aware of it...'.  Other non-verbal areas influence
the `verbal-consciousness' with messages called `intuition'.

I believe that the reason we assign such importance to the verbal
consciousness is that we are social animals.  The importance of our
interactions with others of our ilk is so great that we tend to define
ourselves as that which others can experience.  Because language is the
primary means of communication with others, we perceive verbal
consciousness as being terribly important. Self-awareness would not exist
without the built-in social hooks.

Language, however, has little effect on the non-verbal areas of the mind.
A human in total isolation with no language experience could probably
function quite well with no internal dialogue. Many complex tasks, which
we would like to have computers emulate, are performed without language.

Comments?

					-- Mark Mendel 
					-- ...ihnp4!umn-cs!digi-g!mark

------------------------------

End of AIList Digest
********************

∂02-Dec-84  2145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #169    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Dec 84  21:45:38 PST
Date: Sun  2 Dec 1984 16:54-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #169
To: AIList@SRI-AI


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 169

Today's Topics:
  Planning - Constraint Propagation and Planning,
  AI Systems - Crossword Puzzle Program? & Learn Program,
  Cognition - Diagnostic Reasoning,
  Humor - State-of-the-Art Riddle Program,
  Knowledge Representation - OPS5 Disjunctions
----------------------------------------------------------------------

Date: 3:26 pm  Nov 29, 1984
From: chandra@uiucuxc
Subject: Constraint Propagation & Planning   


	Constraint   Propagation  in  Planning

	
	I am thinking of doing my thesis on Planning. I read Mark Stefik's
thesis on Planning with Constraints. I wanted to know if anybody has seen
any other papers on constraint propagation applied to
	
		a) Planning

	   or   b) Blocks world problems....

	I am planning on a system that will generate constraints from the
physical interaction between blocks and use them to do hierarchical
planning to achieve goals. Still thinking...
 
 - Navin Chandra

full arpa address is : chandra@uiucuxc@uiucdcs@RAND-RELAY.ARPA

Thank you

------------------------------

Date: 10:12 pm  Nov 23, 1984
From: davy@ecn-ee
Subject: Crossword Puzzles?                  


Anybody got a nifty program to fill in crossword puzzles? Basically I 
need something which, given a template of "white squares" and "black 
squares" and a list of words, will generate patterns of the words 
placed into the template. All the program has to do is stick the words 
in the holes and make sure all the vertical/horizontal combinations 
are really words; it doesn't have to handle clues, etc. 
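[A note from the archive editor: the task described above is a small constraint-satisfaction search -- extract the maximal runs of white squares ("slots"), then backtrack over candidate words. A minimal sketch in Python follows; the 3x3 template and word list are invented for illustration, and this is not the program Dave asked for, just the shape of one.]

```python
def find_slots(grid):
    """Collect every maximal across/down run of two or more white squares."""
    rows, cols = len(grid), len(grid[0])
    slots = []                       # each slot: (row, col, drow, dcol, length)
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '#':
                continue
            if (c == 0 or grid[r][c - 1] == '#') and c + 1 < cols and grid[r][c + 1] != '#':
                n = 0
                while c + n < cols and grid[r][c + n] != '#':
                    n += 1
                slots.append((r, c, 0, 1, n))
            if (r == 0 or grid[r - 1][c] == '#') and r + 1 < rows and grid[r + 1][c] != '#':
                n = 0
                while r + n < rows and grid[r + n][c] != '#':
                    n += 1
                slots.append((r, c, 1, 0, n))
    return slots

def solve(grid, slots, words, used=None, i=0):
    """Backtracking fill: '.' marks an empty white square, '#' a black one."""
    if used is None:
        used = set()
    if i == len(slots):
        return True
    r, c, dr, dc, n = slots[i]
    cells = [(r + k * dr, c + k * dc) for k in range(n)]
    for w in words:
        if w in used or len(w) != n:
            continue
        if any(grid[x][y] not in ('.', w[k]) for k, (x, y) in enumerate(cells)):
            continue                 # conflicts with a letter already placed
        saved = [grid[x][y] for x, y in cells]
        for k, (x, y) in enumerate(cells):
            grid[x][y] = w[k]
        used.add(w)
        if solve(grid, slots, words, used, i + 1):
            return True
        used.discard(w)              # undo and try the next word
        for (x, y), ch in zip(cells, saved):
            grid[x][y] = ch
    return False

# Toy 3x3 template with a black center; the word list is made up.
grid = [list("..."), list(".#."), list("...")]
if solve(grid, find_slots(grid), ["cat", "dog", "cod", "tag"]):
    for row in grid:
        print("".join(row))          # cat / o#a / dog
```

[For real puzzle templates one would want to order slots most-constrained-first rather than use this naive scan order.]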

Please mail responses to:

	{decvax, ihnp4, ucbvax}!pur-ee!davy
	ecn.davy@purdue.arpa

Thanks in advance,
--Dave Curry

------------------------------

Date: 12:32 pm  Nov 19, 1984
From: rjs@okstate
Subject: Learn

I am interested in making a 'learn' system for xlisp 1.2+.  To do this
I find myself in need of many examples of not only style but
*ACTUAL WORKING CODE*.  If anyone who has something that currently works
could send me a copy, I will try to build a learn system and will
subsequently post it to the net.

To these ends, a few guidelines should be adhered to:
1.  These xlisp 1.2+ programs should be short, useful, and explain
    some '*function*' that shows xlisp's abilities as a language.
2.  Interest should be aimed at A.I. people and others that would like
    to learn xlisp in a cursory manner (i.e. two approaches).
3.  The 'version' of xlisp 1.2+ that we have has been modified via the
    following net notes:
        net.sources / mit-eddi!jfw / 12:23 am  Sep 19, 1984
        net.sources / mit-eddi!jfw /  4:21 pm  Sep 21, 1984
        net.sources / mit-eddi!jfw /  8:24 pm  Sep 21, 1984
        net.sources / mit-eddi!jfw /  8:56 pm  Sep 24, 1984
        net.sources / mit-eddi!jfw /  2:04 pm  Oct  9, 1984
        net.lang.lisp / ea!mwm /  1:52 am  Oct 13, 1984
    Any programs should be runnable on this system.

Many Thanks in Advance

Roland Stolfa (Stalfonovich),
Oklahoma State University

....!ihnp4!umn-cs!isucs1!\
.......!ucbvax!mtxinu!ea! > okstate!rjs
....!convex!ctvax!uokvax!/

------------------------------

Date: 7:21 am  Nov 12, 1984
From: robison@eosp1
Subject: Re: Diagnosing strategies for humans

This is a followup on the discussion of how doctors reason when
doing diagnoses:

>I don't think it would alarm anyone who does deductive reasoning a lot.
>The method described IS deductive reasoning.  As Sherlock Holmes once 
>observed: 'when all that is impossible has been removed, whatever remains,
>no matter how improbable, must be the truth.'  This doesn't prevent checking
>out the most probable (or the most easily tested) first.

Sherlock Holmes did not, in my opinion, describe what doctors do.
In the first place, many tests are available to doctors, some simple
and inexpensive, to rule out the improbable.  Usually these tests are
not performed until the more likely cases are checked out.  A good
example is a diseased gall bladder.  Its common symptoms are similar
(depending upon how people report them) to lower back pain, ulcers,
and other forms of gastric distress, including viruses.  Doctors
almost always will do the more painful, and more expensive ulcer test
first (barium X-ray), before checking for gall bladder disease, which
is less common.

Sherlock Holmes always reasoned on the basis of very little
information, but he was careful to collect all he could at a given
moment, and then was ready to deduce from that the ONLY possibility,
however improbable.  Doctors will collect some of the information
easily available to them, and then deduce the most probable cause,
no matter how many possible causes are still not ruled out.

Please recall that I'm not flaming about all this.  Anyone who has
suffered from one of the less likely possibilities would prefer that
more deductive reasoning be used sooner; but I can appreciate that
doctors have a system that works a high percentage of the time, and also
minimizes the number of tests required, at the cost of delaying correct
treatment to a relatively few cases.  I'm not sure that any alternative
would be better.
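[A note from the archive editor: the trade-off described above can be stated as a tiny decision rule -- order tests by prior probability per unit of test cost, which favors common, cheaply tested causes and defers expensive rule-outs. A hedged sketch; the disease names, priors, and costs below are invented purely for illustration, not clinical data.]

```python
def next_test(candidates):
    """Greedy ordering: highest prior probability per unit of test cost first."""
    return max(candidates, key=lambda d: d["prior"] / d["test_cost"])

# Invented numbers, purely to illustrate the trade-off:
candidates = [
    {"name": "ulcer",        "prior": 0.30, "test_cost": 5.0},  # e.g. barium X-ray
    {"name": "gall bladder", "prior": 0.15, "test_cost": 2.0},
    {"name": "virus",        "prior": 0.40, "test_cost": 1.0},
]
print(next_test(candidates)["name"])   # virus: common and cheap to check first
```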

	- Toby Robison (not Robinson!)
	allegra!eosp1!robison
	or: decvax!ittvax!eosp1!robison
	or (emergency): princeton!eosp1!robison

------------------------------

Date: 4:34 pm  Nov 21, 1984
From: emneufeld@water
Subject: UNIX - ai                           

/*
	For all you ai-ers, here's a great state-of-the-art
	ai program that runs on UNIX.  Compile this program
	with the command

	cc riddle.c -o riddle -lcurses -ltermlib

*/


#include <math.h>
# include <sys/types.h>
# include <sys/timeb.h>
#include <curses.h>


main()
{
    int     i,j;
    char    a,b,c;
    savetty();
    initscr ();
    printw ("ask me a riddle...\n");
    refresh ();
    i = randy (10);
    j = 0;
    while ((c = getchar ()) != '\n') {
	if (i == j)
	    srand ((int) c);
	j++;
    }
    printw ("Gee! ");
    refresh ();
    sleep (2);
    printw (" That's a tough one...");
    refresh ();
    for (i = 0; i < 10; i++) {
	printw (".");
	refresh ();
	sleep (1);
    };
    printw ("\nI give up !!  What's the answer?\n");
    refresh ();
    while (getchar () != '\n');
    for (j = 0; j < 100; j++) {
	i = randy (3);
	move (randy (31), randy (70));
	switch (i) {
	    case 0: 
		printw ("Hee hee!");
		break;
	    case 1: 
		printw ("Har har!");
		break;
	    case 2: 
		printw ("That's a good one!");
		break;
	    case 3: 
		printw ("Yuk, yuk!!");
		break;
	    default: 
		printw ("That's hilarious!");
		break;
	}
	refresh ();
    }
    endwin();
    resetty();
}

randy(i)
int i;
{
    i = (int) ((double) i * (double) rand () / (double) 017777777777);
    return (i);
}

------------------------------

Date: 7:01 pm  Nov 13, 1984
From: neihart
Subject: OPS5 disjunction dilemma.

        I have encountered a problem with OPS5 as follows:  since
disjunctions are implicitly quoted (see pg 18 of the OPS5 users'
manual), it is impossible to substitute a variable within a
disjunction.  This is needed in the following example, where the ↑sd
field of a passtx should consist of a vector of 2 elements; however,
it doesn't matter which element is listed first, so the condition
element should succeed as long as the two elements are present in any
order.  If there were a method to call functions for arguments of the
condition elements, a function could create the proper disjunction, an
admittedly clumsy solution; however, the call mechanism only works on
the RHSs of productions!

        I have also considered using two ↑sds, ↑sd1 and ↑sd2, storing
the lesser of the two numbers in ↑sd in ↑sd1 and the other in ↑sd2,
thereby eliminating the vector and using two scalar variables.
However, this won't work, since the LHS of the production is incapable
of sorting the two variables (e.g., <d> and <input1> below) to provide
the proper target variable for ↑sd1 and ↑sd2.  How can I get around
this and allow LHS condition elements 4 and 5 below to match
regardless of the order of the two ↑sd arguments?

(p Dflipflop
  (inv ↑name <inv1>  ↑input <input1> ↑output <output1>)
  (inv ↑name <inv2>  ↑input <output1> ↑ output <output2>)
  (inv ↑name <inv3>  ↑input <enable> ↑output <output3>)
  (passtx  ↑name <tx1> ↑gate <enable> ↑sd  
;*** following line doesn't work since <d> and <input1> are taken literally.
	{<< <d> <input1> >> <temp1>} {<< <d> <input1> >> <temp2> <> <temp1>})
  (passtx  ↑name <tx2> ↑gate <output3> ↑sd 
;*** following line doesn't work since <output2> and <input1>
;*** are taken literally, rather than their values being used.
 {<< <output2> <input1> >><temp3>}{<< <output2> <input1> >><temp4> <> <temp3>})
-->
  (make Dff ↑name <inv1>  ↑clock <enable> ↑Q <output2> ↑Qbar <output1>)
  (remove 1 2 3 4 5)
)

------------------------------

Date: 12:25 pm  Nov 15, 1984  
From: paul@ctvax
Subject: OPS5 Disjunctions

The solution is simple (though a little ugly). Productions themselves
are disjunctions. The idea is that rules be arranged into disjunctive
form; then each disjunction is a separate OPS5 rule, and each rule
is itself a conjunction (with possible negations).

(p variant1
   ...ce's that bind <foo1> and <foo2> ...
   ( <foo1> <foo2> )
   -->
   (make found))

(p variant2
   ...ce's that bind <foo1> and <foo2> ...
   ( <foo2> <foo1> )
   -->
   (make found))

(p var1orvar2
   ...ce's that bind <foo1> and <foo2> ...
   (found)
   -->
   .... rhs goes here ...)

This way you can avoid duplication of the RHS.

paul.ct@CSNet-Relay
ctvax!paul

------------------------------

Date: 9:21 pm  Nov 17, 1984  
From: neihart
Subject: OPS5 Disjunctions

That certainly is a solution to the problem, but it becomes inadequate
quickly.  The number of productions needed to express a production which
has n vectors, with m order-independent elements each, is (m!) to the
n productions!  I've tried making a routine which would (build ..) these
productions automatically; however, I've discovered that values in the
attribute-value pairs cannot be expressions which evaluate to a variable,
such as <x>!
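[A note from the archive editor: as a quick sanity check on the blowup -- each vector of m order-independent elements can be written in m! orders, so n such vectors need (m!)^n spelled-out productions, which reduces to m^n in the two-element ↑sd case above. A one-function Python sketch:]

```python
from math import factorial

def rules_needed(m, n):
    """Productions required if every ordering of every vector must be
    spelled out literally: m! orderings per vector, n independent vectors."""
    return factorial(m) ** n

# Two 2-element ↑sd vectors, as in the flip-flop rule:
print(rules_needed(2, 2))   # 4
# The blowup outpaces m**n as soon as the vectors grow:
print(rules_needed(3, 2))   # 36
```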

------------------------------

Date: 9:36 am  Nov 18, 1984  
From: neihart
Subject: OPS5 Disjunctions

I've decided it is easier to make multiple versions of the same thing in
working memory, one for each possible permutation, than it is to have just
one copy with one or more complicated productions for matching.  All the
versions can have the same value in the ↑name field, so that as soon as one
is used, all working memory elements with the same name as the one just
used can be removed.  This is still a clumsy way to get around the problem,
but does anyone know of a better method?

------------------------------

End of AIList Digest
********************

∂04-Dec-84  0104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #170    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Dec 84  01:03:48 PST
Date: Mon  3 Dec 1984 21:17-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #170
To: AIList@SRI-AI


AIList Digest            Tuesday, 4 Dec 1984      Volume 2 : Issue 170

Today's Topics:
  Administrivia - Missing Digest #166 & Digest Sequence,
  AI Tools - Franz Lisp -> Common Lisp & Languages for AI,
  Knowledge Representation - OPS5 Disjunctions,
  Cognition - A Calculus of Elegance,
  Seminars - AI Architectures at TI  (SMU) &
    Karmarkar's Algorithm  (SU)
----------------------------------------------------------------------

Date: Mon 3 Dec 84 21:05:48-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Missing Digest #166

I have reason to believe that digest V2 #166 did not make it
out to several (many?) sites before it was mysteriously deleted
from my system.  Let me know if you need a remailing.  The digest
included

  Philosophy - Dialectics and Piaget,
  Logic Programming - Book Review,
  PhD Oral - Nonclausal Logic Programming,
  Seminar - Learning Theory and Natural Language  (MIT),
  Conference - Logics of Programs

It should have gone out Friday or Saturday (Nov. 30 or Dec. 1).

                                        -- Ken Laws

------------------------------

Date: Mon 3 Dec 84 20:49:45-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Digest Sequence

Usenet readers may have noticed that digest issues 167-169 are missing.
These issues were sent only to Arpanet readers because they contained
only messages from the net.ai discussion -- the ones since Oct. 23,
when our gateway host went down.

                                        -- Ken Laws

------------------------------

Date: 30 Nov 1984 1219-EST
From: Scott Fahlman <FAHLMAN@CMU-CS-C.ARPA>
Subject: Franz Lisp -> Common Lisp

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

A number of people around CMU and CGI have now successfully translated
Franz Lisp programs into Common Lisp.  In general, people seem to have
little trouble moving big programs over, but people who have not yet
done this are understandably apprehensive.  If the people who have
experience in this area want to send me a brief description of the
things to look out for or that caused them trouble, I will try to merge
these experiences into a short guide for people faced with this kind of
task in the future.

-- Scott

------------------------------

Date: Mon, 3 Dec 84 08:15 EST
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Languages for A. I.

I would like to compile a list of all language systems which have been
implemented / proposed for artificial intelligence purposes.  I would
appreciate lists, pointers, and vague recollections from anyone within
the community.  I will be glad to forward any resulting document.

steve
(803) 656-3444

------------------------------

Date: 2 Dec 1984 2124-PST (Sunday)
From: ricks%ucbic@Berkeley (Rick L Spickelmier)
Subject: OPS5 Disjunctions

I have found the following to be a solution to disjunction
in the type of problem you (neihart) have come up against.

If you are trying to represent a pass-transistor with permutable terminals,
just remove the permutable terminals from the pass-transistor working
memory element and make separate working memory elements for each permutable
terminal, such as the following:

(passtx ↑name <tr1> ↑gate <gate>)
(terminal ↑parent <tr1> ↑name <tag1> ↑type sd ↑node <sd1>)
(terminal ↑parent <tr1> ↑name <tag2> ↑type sd ↑node <sd2>)

Then your rule can be:

(p Dflipflop
  {(inv ↑name <inv1>  ↑input <input1>  ↑output <output1>) <inv1>}
  {(inv ↑name <inv2>  ↑input <output1> ↑output <output2>) <inv2>}
  {(inv ↑name <inv3>  ↑input <enable>  ↑output <output3>) <inv3>}
  {(passtx ↑name <tx1> ↑gate <enable>)  <passtx1>}
  {(terminal ↑parent <tx1> ↑name <tag1>    ↑type sd ↑node <input1>) <term1>}
  {(terminal ↑parent <tx1> ↑name <> <tag1> ↑type sd ↑node <d>)      <term2>}
  {(passtx ↑name <tx2> ↑gate <output3>) <passtx2>}
  {(terminal ↑parent <tx2> ↑name <tag2>    ↑type sd ↑node <output2>) <term3>}
  {(terminal ↑parent <tx2> ↑name <> <tag2> ↑type sd ↑node <input1>)  <term4>}
  -->
  (make Dff ↑name <inv1>  ↑clock <enable> ↑Q <output2> ↑Qbar <output1>)
  (remove <inv1> <inv2> <inv3>)
  (remove <passtx1> <passtx2> <term1> <term2> <term3> <term4>))

Note:  I have run examples using the above and using a single working memory
element for a pass-transistor with separate rules for each allowable
permutation - and in all cases I have tried, adding extra rules gives
better performance than adding more working memory elements
(but if you are interested in readability, the extra working memory
element representation is better).
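[A note from the archive editor: the underlying idea, outside OPS5, is simply to store the permutable pair as an unordered collection so that matching ignores order. A sketch in Python with a hypothetical netlist record -- this mimics the effect of the separate-terminal trick, not OPS5 semantics:]

```python
# Source/drain terminals are interchangeable, so keep them as a frozenset:
# equality then succeeds regardless of which terminal was listed first.
passtx = {"name": "tx1", "gate": "enable", "sd": frozenset({"d", "input1"})}

def sd_matches(tx, a, b):
    """True if the transistor's source/drain pair is {a, b}, in either order."""
    return tx["sd"] == frozenset({a, b})

print(sd_matches(passtx, "input1", "d"))   # True
print(sd_matches(passtx, "d", "input1"))   # True, order-independent
```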

            Rick L Spickelmier (ricks@berkeley)
            Electronics Research Laboratory, UC Berkeley

------------------------------

Date: 3 Dec 84 1531 EST (Monday)
From: Lee.Brownston@CMU-CS-A.ARPA
Subject: OPS5 disjunction problem

A better solution is to remove the sd values from the passtx working memory
element.  If the name fields of the passtx wme's contain unique values, then
two sd children can be created which point to their common parent.

(literalize passtx
  name              ; a unique value
  gate)

(literalize sd
  passtx            ; the same value as the "name" field of the passtx parent
  value
)

When the passtx is made, the two sd children are linked to it.

-->
...
(make passtx ↑name   <passtxname>
             ↑gate   <passtxgate> )
(make sd     ↑passtx <passtxname>
             ↑value  <sd-value-1> )
(make sd     ↑passtx <passtxname>
             ↑value  <sd-value-2> )
...

Then it is easy to test the disjunction because the sd elements are unordered
(except by time tag, which is not used in matching).

(p Dflipflop
  (inv     ↑name   <inv1>
           ↑input  <input1>
           ↑output <output1> )
  (inv     ↑name   <inv2>
           ↑input  <output1>
           ↑output <output2> )
  (inv     ↑name   <inv3>
           ↑input  <enable>
           ↑output <output3> )
  (passtx  ↑name   <tx1>
           ↑gate   <enable>  )
  (sd      ↑passtx <tx1>
           ↑value  <d>       )        ; is this where <d> is to be bound?
  (sd      ↑passtx <tx1>
           ↑value  { <input1> <> <d> } )
  (passtx  ↑name   <tx2>
           ↑gate   <output3> )
  (sd      ↑passtx <tx2>
           ↑value  <input1>  )
  (sd      ↑passtx <tx2>
           ↑value  { <output2> <> <input1> } )
-->
  (make Dff ↑name  <inv1>
            ↑clock <enable>
            ↑Q     <output2>
            ↑Qbar  <output1> )
  (remove 1 2 3 4 5 6 7 8 9)
)

Might I take this opportunity to make a plug for a forthcoming book on OPS5?
It is called "Programming Expert Systems in OPS5," and is to be published in
mid-April by Addison-Wesley.  The authors are Lee Brownston (CMU), Robert
Farrell (Yale), Elaine Kant (CMU), and Nancy Martin (Wang Institute).

------------------------------

Date: Thu, 29 Nov 84 11:37 EST
From: Steven Gutfreund <gutfreund%umass-cs.csnet@csnet-relay.arpa>
Subject: A calculus of elegance (re: kludge v2 #162)

I found your definition of Kludge very interesting. In the sense you
use it, it seems to be the antonym of Elegance (a term frequently heard
in mathematical circles). The problem is I have never seen a precise
definition of elegance. Would you like to try and produce a definition
for it?

1. What is it about representation schemas (symbolic/analogic) that
   leads mathematicians or programmers to consider them to be elegant
   representations of a problem?

2. Does elegance extend beyond the domain of symbolic representations
   to what David Smith (Pygmalion) called non-Fregean systems such
   as art (paintings)? Do we call this esthetics?

3. Is there a calculus of esthetics? Can we capture its properties
   in a formal system (axiomatic) or does it correspond to
   reasoning structures inside the brain (a dual of K-lines)?

                                        - Steven Gutfreund

------------------------------

Date: Sat, 1 Dec 1984  01:06 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: A calculus of elegance (re: kludge v2 #162)

  Dear Steven,

Mathematical elegance must be a lot of things; generally it includes
economy and surprise: the sense of getting more from something than
one expected.  I don't believe it very often pays to "define" a
commonsense word, because it includes too many unrelated things -- or
ones which cannot be related except within some larger psychological
theory.  A sounder approach would be to define half a dozen influences
which might contribute, and make separate theories of them.  In
Poincare's famous essay on unconscious mathematical creativity, he
leaves open the question of how the unconscious mind decides when its
mathematical efforts have produced a structure which might be worthy
of the conscious mind's attention.

In fact, I would say that, rather than try to define mathematical
elegance, one would better spend the time refining a system which uses
criteria of possible mathematical value -- e.g., Lenat's AM and
Eurisko systems.  Then, when we better understand a way to make a
system make mathematical discoveries, we can return to speculate about
how human minds do such things.

As for a calculus of esthetics, that probably reflects even more
varied cultural acquisitions.  There was a whole book about this, in
the 1920's I think, called "Esthetic Measure", by George D. Birkhoff,
a great mathematician.  Here are some of my views, from a page in my
not-quite-finished new book.

PAGE 47:  STYLE

Why do we like so many things which seem to have no earthly use?
We often speak of this with mixtures of defensiveness and pride.

     "Art for Art's sake."
     "I find it aesthetically pleasing."
     "I just like it."
     "There's no accounting for it".

"There's no accounting for" sounds like a guilty child who's been told
to keep accounts.  "I just like it" sounds like one is hiding reasons
too unworthy to admit.  Why do we take our refuge in such vague,
defiant principles? Indeed, we @B{ought} to feel ashamed of doing
things that have no use -- if it is written in our self-ideals that it
is bad to squander time.

However, there are practical reasons to maintain stylistic
preferences.  Here are some reasons why it makes sense to make
choices which are empty in themselves, so long as they are based on
predictable, coherent uniformities.

SIMPLICITY: The legs of a chair work equally well if made square or
round.  Then, why do we tend to choose our furniture according to
systematic style or fashions?  Because they make it easier to
understand whole scenes; you can more quickly see which things are
similar.

DISTRACTION:  The purpose of a picture's frame is normally to
circumscribe its boundary.  Too much variety might distract viewers
from the pictures themselves.  Thus, the more easily the style of the
frames can be  identified -- even if by encrusting them with
ornaments --  the frames themselves can be more easily ignored.

CONVENTION: It makes no difference whether a single car drives on
the left or on the right. But when there are many cars, they must do
the same, one way or the other, or they'll crash.  Societies need
rules which make no sense at all for single individuals.

It saves a lot of mental work, to make each choice the way you did
before. To find out what to do, just find the rule in memory.
Strangely enough, this principle can be the most valuable, when the
situation it applies to is the least critical, because of the
following principle:

FREDKIN'S PARADOX: The more equal seem the two alternatives, the
harder it will be to choose -- yet the more equal they are, the less
the choice matters.  Then, the more time spent, the more time lost.

It is no wonder that we find it hard to account for "taste" -- since,
often, it depends on all the rules we use when ordinary reasons
cancel out!  Does this mean that Fashion, Style, and Art are all the
same? No, only that they have the common quality that their diverse
forms of sense and reason are further than usual from the surface of
thought.  This is why, when we use stylish ways to make decisions, we
often feel a sense of being "free" from practicalities.  Those
decisions would seem more constrained, were we aware of how they're
made.

When should one give up reasoning and resort to rules of style?
Only when we're fairly sure that further thought will just waste
time.  What are those fleeting hints of guilt we feel for liking
works of art? Perhaps they're how our minds remind themselves  to not
use rules, which we don't understand, too recklessly.

------------------------------

Date: Sat, 1 Dec 84 07:36:16 cst
From: leff@smu (Laurence Leff)
Subject: Seminar - AI Architectures at TI  (SMU)

Department of Computer Science and Engineering Seminar
    Southern Methodist University

SPEAKER: Dr. Satish Thatte
         Computer Science Laboratory Group
         Texas Instruments Incorporated
         Dallas, Texas

TOPIC: Computer Architectures for Artificial Intelligence

TIME: 3:00-4:00 p.m., Wednesday, December 5, 1984
PLACE: 315 Science Information Center, SMU

ABSTRACT: The seminar will cover the research on computer architecture for
symbolic processing and artificial intelligence at Texas Instruments.  Our
work is concentrated on three major areas: memory system architecture,
language and compiler technology, and symbolic processor design.  The memory
system research is aimed at developing a "uniform memory abstraction" that
comprehends a very large, recoverable, garbage-collected, virtual memory
system to support short-lived, as well as persistent objects.  Such a memory
system is expected to play a crucial role in supporting large,
knowledge-intensive artificial intelligence applications.  The language and
compiler technology is based on the language SCHEME, a powerful and elegant
dialect of LISP.  The processor design effort is based on using the Reduced
Instruction Set Computer (RISC) philosophy to implement a virtual machine
that supports the SCHEME language, as well as the uniform memory abstraction.

------------------------------

Date: Sun 2 Dec 84 17:10:28-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Seminar - Karmarkar's Algorithm  (SU)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

12/6/84 - Irvin Lustig (OR Dept. - Stanford)

   "Karmarkar's Algorithm:  Theory, Practice, and Unfinished Business"

Recent articles in Science Magazine and the New York Times have
brought to light a new algorithm for Linear Programming by N.
Karmarkar.  The excitement created by this discovery in the Operations
Research and Computer Science communities is understandable,
considering the spectacular nature of the reported results.  In my
talk, I will discuss the theoretical result of Karmarkar, some of the
practical considerations of the algorithm, and how this algorithm is
leading to new heuristics for Linear Programming.  I will also explain
how the result has not yet been shown to be practically efficient,
even though fairly good results have been reported in the news media.

Time and place: December 6, 12:30 pm in MJ352 (Bldg. 460)

                                                - Andrei Broder

------------------------------

End of AIList Digest
********************

∂06-Dec-84  1355	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #171    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Dec 84  13:55:01 PST
Date: Thu  6 Dec 1984 09:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #171
To: AIList@SRI-AI


AIList Digest            Thursday, 6 Dec 1984     Volume 2 : Issue 171

Today's Topics:
  Applications - MACSYMA,
  AI Tools - XLISP Source & Franz Lisp -> Common Lisp,
  Humor - Typagrophical Erorrs,
  AI News - Recent Articles,
  Algorithms - Sorting Malgorithm,
  Knowledge Representation - OPS5 Disjunctions,
  Seminars - Scheme Overview  (Yale) &
    Principles of OBJ2  (MIT) &
    QUTE Functional Unification Language  (IBM-SJ)
----------------------------------------------------------------------

Date: Tue, 4 Dec 84 13:52 CDT
From: Joyce_Graham <jgraham%ti-eg.csnet@csnet-relay.arpa>
Subject: References to MACSYMA applications

I am putting together a little pitch for the TI Journal on the usefulness
of MACSYMA.  What I would like are references to articles about projects
that made use of MACSYMA.  I would also welcome any folklore that may be
floating around.  Can anyone help me?

Joyce Graham
Texas Instruments Incorporated
Post Box 801
M/S 8007
McKinney, TX  75069

from Arpanet - jgraham%ti-eg@csnet-relay
from Csnet   - jgraham@ti-eg

------------------------------

Date: Wed, 5 Dec 84 14:46:02 PST
From: Randy Schulz <lcc.randy@UCLA-LOCUS.ARPA>
Subject: Wanted: xlisp source

I'd like to find out how to get the source for version 1.2 of xlisp.
I'll be using it on a Macintosh, and compiling it with the Manx C
compiler.  If there are multiple versions of the source, I'd like to
get the one most appropriate to that environment.  Thanx in advance.

                                                Randy Schulz
                                                Locus Computing Corp.

                                                lcc!randy@ucla-cs
                                          trwrb!lcc!randy
                {trwspp,ucivax}!ucla-va!ucla-cs!lcc!randy
         {ihnpr,randvax,sdcrdcf,ucbvax}!ucla-cs!lcc!randy

------------------------------

Date: Wed, 5 Dec 1984  13:22 EST
From: "Scott E. Fahlman" <Fahlman@CMU-CS-C.ARPA>
Subject: Franz Lisp -> Common Lisp


Since my post appeared on this list (and thus received wider circulation
than I had really intended) I've had a number of requests for the Franz
Lisp to Common Lisp Conversion Guide.  When and if this document (or any
other conversion guide) is available, I'll put it in some place easily
accessible via arpanet and will send a pointer to AIList.  Don't
hold your breath, however.  So far, the response from people who have
done conversions is underwhelming, and while I would like to see this
document come into being, I do not have the time to go re-learn Franz
and gain the relevant conversion experience myself.  All I can say at
present is that the people who have done Franz to Common Lisp
conversions have reported very little trouble.

------------------------------

Date: Mon, 3 Dec 84 9:42:45 EST
From: Pete Bradford (CSD UK) <bradford@Amsaa.ARPA>
Subject: Typagrophical Erorrs.


        Those who, like me, enjoyed the Palm Springs Desert Sun paragraph
which was reprinted in the New Yorker would enjoy an article in the just
published Winter edition of the British periodical, Punch. The article is
entitled 'Wernit'.
        I cannot possibly describe it (is this a deficiency of the English
language?!), but Punch is widely available over here, at better bookshops
and in most college libraries.  Bear in mind when reading it that the
computer referred to belongs to the British newspaper 'The Guardian', and
that this paper is notorious for its typos.

                        Good reading,
                                PJB

------------------------------

Date: Sat, 1 Dec 84 06:03:54 cst
From: leff@smu (Laurence Leff)
Subject: AI News


Electronics Week, November 19, 1984
ICOT Details Its Progress.  Reports on work done on prolog
machines, a new logic language called Mandala. page 20


IEEE Transactions on Software Engineering, Sept 1984, Volume SE-10, No 5
Reusability Through Program Transformations - discusses using a
transformation-based system to convert a lisp program to Fortran. page 589

Empirical Studies of Programming Knowledge. - this is a cognitive science
study on the use of plans by experts and novice programmers.  Should
be of interest to those following the Plan Calculus work from MIT. page 595


IEEE Computer October 1984,  Volume 17 1984
This is their Centennial Issue.  The articles here are summaries of
various divisions of computer research and practice.
Relevant articles are "Knowledge-Based Expert Systems" by Frederick
Hayes-Roth, "Robotics" by John F. Jarvis, "Computing in Medicine" by
K. Preston Jr. et al., and "Speech Processing" by Harold Andrews.


IEEE Spectrum December 1984,
A one column article on Do What I Mean facilities.  page 29


Electronics Week November 26, 1984, page 50:
Article on venture between Isis Systems Ltd and Imperial Chemical
Industries to market expert systems.


Infoworld November 12, 1984 page 36-41
Article on marketing natural language interfaces for microcomputers.


Datamation, November 1 1984
Page 10, the following sentence was found in their Look Ahead section:
"TRW, the big defense contractor, is looking for some 500 symbolic
processors (Lisp Machines, that is) for use in a global weather
mapping application."

"The Overselling of Expert Ssytems" by Gary R. Martins page 76
Rather scathing attack on AI.  If you enjoyed Drew McDermott's Artificial
Intelligence meets Natural Stupidity, you should read this one too.

"The Blossoming of European AI" by Paul Tate page 85
discusses work by Imperial Chemical Industries, Elf Aquitaine, Schlumberger,
and Framentec (set up by Teknowledge).  Sinclair has announced a Prolog
for one of its home machines and expects to have expert system products
out for it soon.  Also, Expert Systems International has announced
ES/P Advisor for $1300.00 (runs on 16-bit micros).  Also has discussions
of management reactions to AI and work done along the lines of R/1.

"AI and software Engineering" by Robert Kowalski page 92
Talks about using AI techniques to handle a program to work with the
British Naturalization Act.  Presents AI as a technique like decision
tables, dataflow diagrams to improve productivity in general software
development, e.g. business sytems.

Page 163: review of about 10 books on AI.


Electronics Week, November 5, 1984, page 24
Discusses DARPA automated vehicle effort.


Electronics Week, December 3, 1984.
Cautiously Optimistic Tone Set for Fifth Generation Page 57-63 (Note
that this is a six page article.)

Discussed progress of Japanese ICOT effort. In the words of Susan Gerhart who
was quoted in the article, "The single thing that impresses me the most did
not really come out clearly at the conference but did at the ICOT open-house
demonstration the next week; it was that so much new stuff was all working
together -- new hardware, basic software, and application demos--all of it
based on logic programming."  Note that the *operating system* for the new
system is written in a logic programming language called KL1.

------------------------------

Date: 5 Dec 84 1806 EST (Wednesday)
From: Lee.Brownston@CMU-CS-A.ARPA
Subject: A baaaad algorithm for sorting

One way to make a sort of n items very expensive is to compute the set of all
n! permutations of the n items and map each permutation onto its Godel number.
(One can find opportunities to dawdle in generating primes, too.)  Finding
the sorted permutation is equivalent to finding the minimum or maximum
Godel number if the Godelization preserved order.  This can be accomplished
by sorting the Godel numbers.  Thus, the problem of sorting n items has been
"reduced" to that of permuting, Godelizing, and sorting n! integers.  The
recursion cannot be infinite, of course, but may stop as soon as the use
of resources exceeds that of some turkey who thinks he has come up with a
slower sort.
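
Read literally, the recipe above is implementable.  Here is a deliberately
wasteful Python sketch; the encoding prod(p_i ** a_i) over the first n
primes is one order-preserving Godelization for lists of small non-negative
integers, and is my choice of encoding, not necessarily Brownston's:

```python
from itertools import permutations
from math import prod

def first_primes(n):
    """First n primes by trial division (suitably wasteful)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def goedel_sort(items):
    """Sort small non-negative integers by Godel-numbering all n!
    permutations and picking the one with the largest number."""
    ps = first_primes(len(items))
    numbered = [(prod(p ** a for p, a in zip(ps, perm)), perm)
                for perm in permutations(items)]
    # With ascending primes, prod(p_i ** a_i) is maximized by putting
    # the largest exponents on the largest primes, i.e. by the
    # ascending permutation -- so the maximum is the sorted order.
    return list(max(numbered)[1])
```

For example, goedel_sort([3, 1, 2]) yields [1, 2, 3], at a cost of n!
multi-precision multiplications plus prime generation -- comfortably
slower than any sort a turkey is likely to propose.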

------------------------------

Date: 4 Dec 1984 0942-PST (Tuesday)
From: ricks%ucbic@Berkeley (Rick L Spickelmier)
Subject: More on OPS5 Disjunctions


The idea of separating the 'sd' field from the 'passtx' element
and creating separate elements for each 'sd' was presented in
two submissions (ricks%ucbic@berkeley and Lee.Brownston@CMU-CS-A).
I would like to point out a difference that looks important
in the original application (of neihart).

Lee's submission distinguished the two 'sd' elements by making sure
they were not connected to the same node (the 'value' attribute).
In this particular example it does not make sense to tie the two 'sd's
together, but in general you may want to connect two or more of these
types of terminals (from a single element) to the same node (MOSFETs
used as capacitors have their source and drain connected together,
and in TTL design, NAND gates are occasionally used as inverters by
tying their inputs together).

The above argument is why I put a unique tag on each 'sd' working memory
element: the tag can be used to distinguish the elements, thus allowing
them to be tied to the same node.
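
As a hedged illustration of the tagging argument, here is a minimal Python
sketch; the dict representation and field names below are invented
stand-ins for OPS5 working-memory elements, not the original attributes:

```python
from itertools import combinations

# Two source/drain terminals of the same transistor, wired to the SAME
# node "n1" (as in a MOSFET used as a capacitor), distinguished by tag.
wm = [
    {"type": "sd", "passtx": "tx1", "value": "n1", "tag": 1},
    {"type": "sd", "passtx": "tx1", "value": "n1", "tag": 2},
]

def tied_terminals(wm, tx):
    """Find pairs of *distinct* 'sd' elements of transistor tx.
    Testing tag inequality (rather than value inequality) still
    matches when both terminals share a node."""
    sds = [e for e in wm if e["type"] == "sd" and e["passtx"] == tx]
    return [(a, b) for a, b in combinations(sds, 2) if a["tag"] != b["tag"]]
```

A value-inequality test (the '<> <d>' style condition) would reject the
pair above, since both values are "n1"; the tag test accepts it.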

            Rick Spickelmier (ricks@berkeley)
            Electronics Research Laboratory, UC Berkeley

------------------------------

Date: 3 Dec 1984  16:26 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Scheme Overview  (Yale)

        [Forwarded from the MIT bboard by SASW@MIT-MC.]

                    AI Revolving Seminar

                An Overview of Yale Scheme

                       Jonathan Rees


        Wednesday   12/5/84     4:00pm      8th floor playroom

Yale Scheme, also known as T, was developed over the past three years by
the Yale Computer Science Facility.  It is being used as a production
Lisp system at Yale, UCLA, and elsewhere.  It features a compiler which
generates native VAX and MC68000 code and compiles closure-intensive
code efficiently enough that closures may be used in preference to
record structures for many applications which are space- or
time-critical.  I will discuss how the language and implementation work
and how T is different from other Scheme and Lisp systems, and give a
list of what I consider to be unsolved problems in the design of
Scheme-like languages.
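
The closure-for-record trade mentioned in the abstract can be sketched
outside T; this Python fragment illustrates the idea only, not T's
compiled representation of closures:

```python
def make_point(x, y):
    # The closure's captured environment plays the role of the record's
    # fields; the returned dispatcher plays the role of field accessors.
    def dispatch(field):
        return {"x": x, "y": y}[field]
    return dispatch

p = make_point(3, 4)
```

Here p("x") behaves like a field access on a two-slot record; a compiler
that allocates closure environments cheaply (as the abstract claims T's
does) makes this representation competitive with explicit records.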

------------------------------

Date: 4 Dec 1984 1105-EST
From: ALR at MIT-XX
Subject: Seminar - Principles of OBJ2  (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"Principles of OBJ2"

Jean Pierre Jouannaud
University of Nancy (France) and SRI

Friday, December 7, 1984
Refreshments at 3:00 pm, talk at 3:15 pm
Room NE43-453


OBJ2 is an object-oriented language with an underlying formal
semantics based on equational logic and an operational semantics
based on rewrite rules.  Key OBJ2 principles are:

1.  Use of parameterized modules (Objects and Theories).  Objects
encapsulate executable code (e.g. rewrite rules), whereas Theories encapsulate
assertions that may be nonexecutable (e.g. first order formulae).

2.  Specification of interface requirements for parameters (Views).

3.  Use of Module Expressions for creating complex combinations of modules.

4.  Use of subsorts to support:

        a simple yet powerful form of polymorphism (overloading).

        partially defined operations (use of "sort-constraint").

        a simple yet powerful and automatic form of error-recovery.

5.  Use of user defined "built-ins", e.g. low level data types described in the
implementation language itself, e.g. MACLISP.  "Built-ins" are first class
objects, e.g. all other constructs apply to them, including subsort definitions.


We will discuss these principles by means of examples of OBJ
specifications and point out the main implementation issues.
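
As a minimal sketch of principle 1's "executable code (e.g. rewrite
rules)", here is a toy rewrite engine over tuple terms.  This is generic
equational rewriting under invented conventions, not OBJ2 itself:

```python
# Terms are atoms (strings) or tuples; pattern variables begin with '?'.

def match(pat, term, s):
    """One-way matching of a pattern against a ground term."""
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in s:
            return s if s[pat] == term else None
        return {**s, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            s = match(p, t, s)
            if s is None:
                return None
        return s
    return s if pat == term else None

def substitute(t, s):
    if isinstance(t, str) and t.startswith('?'):
        return s[t]
    if isinstance(t, tuple):
        return tuple(substitute(x, s) for x in t)
    return t

def rewrite_once(term, rules):
    """Apply the first applicable rule, outermost first."""
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return substitute(rhs, s)
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            new = rewrite_once(sub, rules)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(term, rules):
    """Rewrite to normal form (assumes the rules terminate)."""
    while (new := rewrite_once(term, rules)) is not None:
        term = new
    return term

# Peano addition as rules: add(0, y) -> y; add(s(x), y) -> s(add(x, y))
peano = [(('add', '0', '?y'), '?y'),
         (('add', ('s', '?x'), '?y'), ('s', ('add', '?x', '?y')))]
```

With these rules, normalize(('add', ('s', '0'), ('s', '0')), peano)
reduces 1 + 1 to ('s', ('s', '0')), i.e. 2 -- the "executable" reading
of an equational specification.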


HOST:  Prof. Guttag

------------------------------

Date: Wed, 5 Dec 84 16:59:47 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - QUTE Functional Unification Language  (IBM-SJ)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Mon., Dec. 10 Computer Science Seminar
  10:30 A.M.  QUTE:  A FUNCTIONAL LANGUAGE BASED ON UNIFICATION
  Aud. B      A new programming language called Qute is introduced.
            Qute is a functional programming language which
            permits parallel evaluation.  While most functional
            programming languages use pattern matching as basic
            variable-value binding mechanism, Qute uses
            unification as its binding mechanism.  Since
            unification is bidirectional, as opposed to pattern
            match which is unidirectional, Qute becomes a more
            powerful functional programming language than most of
            existing functional languages.  This approach enables
            the natural unification of logic programming language
            and functional programming language.  In Qute it is
            possible to write a program which is very much like
            one written in conventional logic programming
            language, say, Prolog.  At the same time, it is
            possible to write a Qute program which looks like an
            ML (which is a functional language) program.  A Qute
            program can be evaluated in parallel
            (and-parallelism) and the same result is obtained
            irrespective of the particular order of evaluation.
            This is guaranteed by the Church-Rosser property
            enjoyed by the evaluation algorithm.

            M. Sato, Kyoto University
            Host:  J. Halpern
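
The contrast the abstract draws between one-way pattern matching and
bidirectional unification can be sketched as follows.  This is textbook
unification in Python, not Qute's actual algorithm; the '?variable'
convention is invented here, and the occurs check is omitted for brevity:

```python
def is_var(t):
    # Variables are strings beginning with '?'; they may appear on
    # EITHER side of a unification, unlike in one-way pattern matching.
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

Unifying ('pair', '?x', 2) with ('pair', 1, '?y') binds variables on
both sides at once -- exactly what one-way pattern matching cannot do.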

------------------------------

End of AIList Digest
********************

∂06-Dec-84  1853	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #172    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Dec 84  18:52:51 PST
Date: Thu  6 Dec 1984 13:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #172
To: AIList@SRI-AI


AIList Digest             Friday, 7 Dec 1984      Volume 2 : Issue 172

Today's Topics:
  Linguistics - Indonesian & Aymara' & Translation & Deficiencies
  Conference - Theoretical Approaches to Natural Language Understanding
----------------------------------------------------------------------

Date: 3 Dec 84 08:48 PST
From: Newman.pasa@XEROX.ARPA
Subject: Indonesian


In reply to the note from rob@ptsfa about "Indonesian".


Just one question in regard to your note about "Indonesian". Do you mean
that all dialects spoken in Indonesia have the features that you
mention? Or do you mean that the official language of Indonesia (called
Bahasa Indonesia I believe) has these features? Could you be more
specific? It has been many years since I lived in Indonesia, and I never
really learned enough of the language to have an opinion about your
assertions, but I do know that there are many languages spoken in
Indonesia, and that what you say may be true of any number of these
languages.


>>Dave

------------------------------

Date: Sat, 1 Dec 84 20:20:49 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Andean interlingua monograph.

Some mention has been made on this list recently of Aymara'
(that is an accent mark), an Andean language purportedly used
successfully by Iva'n Guzma'n de Rojas of La Paz, Bolivia, as
an interlingua for machine translation.  An article appears
in today's New York Times (Saturday, December 1) on page 4.
Probably of most interest to those involved in the interlingua
debate will be a reference to a 150 page monograph by Mr. Guzma'n
(no title given) published by the International Development
Research Center in Ottawa.  The article also mentions that
Mr. Guzma'n uses ``three-valued formulas, following the Polish
scientist Jan L/ukasiewicz'' to represent the Aymara' logic.
The remainder of the article seems to largely repeat what has
previously been cited on this list from articles in the Los
Angeles Times.
                                                -- Harry

------------------------------

Date: Mon, 3 Dec 84 9:16:12 EST
From: Pete Bradford (CSD UK) <bradford@Amsaa.ARPA>
Subject: Translation.

        The October 1 Electronic News article on Japanese-English translation
reminds me.........

        A young guy in the Pentagon devised this remarkable program to
translate English into Russian, and vice-versa.
        The Secretary of State for Defense was to visit his office and be
given a demonstration of the system.  On the arrival of the 'big-wig', our
hero asked him if he had a phrase he would like translated.  "What about 'The
spirit is willing, but the flesh is weak'?" asked the top man.
        This phrase was duly typed in, and after much flashing of lights etc,
the Russian translation appeared on the screen.   This was smugly read out to
the Secretary of State who pointed out, rather sheepishly, that he did not
speak Russian and was in no position to judge the quality of the translation.
        Things were about to break up in a very unsatisfactory and embarrassing
manner when our hero yelled "I've got it!  I'll just reverse the polarity of
the program and feed back the phrase it just came up with!".   "Brilliant!"
gulped his Director, recovering just sufficiently from his recent apoplexy
to enable him to talk again, "Let's do that.".
        The Russian phrase was then fed back into the machine which had now
been switched into the Russian-English mode, and the small crowd waited
expectantly while more lights flashed and blinked.
        They say that the Director is still recovering in George Washington
Hospital and our hero has, of course, given up all thoughts of a successful
career in the Defense Department.  How was he to know the program would play
such a dirty trick on him?  It certainly performed the translation it had
been asked to do, but it did seem too loose or colloquial a translation -
'The whisky's OK, but the meat's lousy!'...

------------------------------

Date: Monday, 03 Dec 84 10:46:51 EST
From: thompson (ross thompson) @ cmu-psy-a
Subject: Language deficiencies (or wife beating)

There was a mention earlier on this bboard that it is often difficult
to answer questions, because there is an implication which is not
true contained in the question.  The example given was the question
"Do you persist in your lies?"  A more well known example of the same
phenomenon is the classic "Have you stopped beating your wife?"

I don't know a lot about eastern religions, and I am sure I will be
shot down in flames for going out on a limb, but I believe that Zen
provides us with at least one answer to this problem.  If, in response
to a question, you reply "Mu," then you have ``unasked'' the question.
The situation in which you do this is precisely what is described above.

The interesting thing about this (to me) is not what word they chose,
but the fact that there is an accepted linguistic practice among
these people for dealing with what many people around here
would call a ``deficiency.''
                                        Ross Thompson

------------------------------

Date: Mon, 3 Dec 84 10:00:59 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Communication


        A General Semanticist named Harrington
        Returned from his colloquy swearing ten
        Natives could count
        Only half the amount
        Any properly trained investigator could while ignoring the fact
          that it was not really counting that they were doing, but
          rhyming.

Ancient chestnut from anthropological linguistics:

        Anthropologist (pointing):  What's that?
        Native: <forefinger>
        Anthropologist (pointing again):  And what's that?
        Native: <forefinger>

        (This goes on for a while.)

        Anthropologist:  You see, they have only the word <forefinger>
        for all these things, and make up for their deficient vocabulary
        by grunts and gestures.

(In fairness, this combines the (true) `finger' story with the persistent
canard about a `primitive language with grunts and gestures'.)

Opinion:  language, properly speaking, is principally a means of transmitting
information.  It happens to be used together with representational and
gestural systems (including the gestural system we know as intonation and
inflection) as a means of communicating a great deal more than (and sometimes
contrary to) the bare-bones information that it transmits.  See e.g.
Z. S. Harris, Mathematical Structures of Language, esp. ch 2 `Properties
of language relevant to a mathematical formulation'.

(By `contrary to' I refer to irony and the like.  Though I know no
Dutch, I bet there are instances where speakers say something like `That
must have been a gezellig meeting!', referring to e.g. a collection of
`strange bedfellows' brought together by political expedience.)

Much of this discussion confuses linguistic competence with communicative
competence.  Communicative competence boils down mostly to skills in
engaging others in a willing desire to communicate and understand.

        I have seen an affable extrovert on a Greek train
        communicate quite well with speakers of at least three
        languages of which he knew perhaps two words each.  (My
        companion identified Hungarian and Slovenian, I recognized
        German.) A gezellig time was had by all, proof positive of
        satisfactory communication (whether or not much information
        is transmitted), and liquor played a miniscule role.

        I have seen fluent speakers of the same dialect of English
        unable even to transmit information to one another, because
        of their abject failure to communicate.  And so have you.

Stereotypically, right-limbic communicative skills are best developed and
exemplified by women in western cultures, and by Japanese and Chinese
cultures in our reluctantly waning ethnocentricity.  What we call `small
talk' (software aside).

(Is there any AI work miming right-cerebral and right-limbic functions,
other than visual pattern perception?)

An important part of `engaging others in a willing desire to communicate
and understand' is the range of what I call gestures of solidarity--
affirming that we are comembers of the same gezellig in-group.  Jargon
plays a central role, especially in an electronic-mail environment.
Denigration of outsiders is felt necessary when the boundaries have not
yet been clearly defined and the door so to speak is not yet shut or
when an unwelcome interloper is suspected.  There are many unconscious
and semiconscious identifiers of class, ethnos, region, and so on in the
range of vocabulary choice and pronunciation (dialect), application of
standard or nonstandard grammatical rules (to call them standard and
nonstandard of course begs the sociological question), shared references
(`remember the old man who bought licorice there') and so on.

(Fade to track of Frank Sinatra crooning `Gezelligheid is made of this'.
Bring up following quote from Harris op. cit. 216:

        . . . the very simplicity of this system, which
        surprisingly enough seems to suffice for language, makes
        it clear that no matter how interdependent language and
        thought may be, they cannot be identical. It is not
        reasonable to believe that thought has the structural
        simplicity and the recursive enumerability which we see
        in language.  So that language structure appears rather
        as a particular system, satisfying the conditions of
        [the chapter cited above], . . . which is undoubtedly
        necessary for any thoughts other than simple or
        impressionistic ones, but which may in part be a rather
        rigid channel for thought.)


Bruce Nevin, bn@bbncch

------------------------------

Date: Tue, 4 Dec 84 15:59:27 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: re: saying the unsayable


      > Languages are not differentiated on the basis of what is
      > possible or impossible to say, but on the basis of what
      > is easier or harder to say.

                                        --Larry Wall (V2 #167)

My understanding of tense morphemes is that they have the same semantic
relation to adverbs of time that pronouns and classifier nouns have to
nouns: having said the adverb, the tense morpheme is obligatory; having
said the tense morpheme, certain non-specific adverbs (`in the future',
`in the past') need no longer be said.  But whether a given language has
a particular tense morpheme or not, the equivalent information about
temporal relationships may be expressed with adverbs of time or
conjunctions (`before', `after').  (Cf.  Harris, A Grammar of English on
Mathematical Principles, 265-79.)

I don't think even Whorf claimed that Hopi lacked all adverbs of time
and temporal conjunctions.

Achumawi, a Hokan language of northern California on which I have done
some work, has a dual number, like Classical Greek.  The dual is
obligatory whenever referring to a pair of something.  Having said the
dual suffix, the actual noun pair is almost always tacit.  The dual is
also obligatory in direct address to one's mother-in-law (if a man) or
father-in-law (if a woman), and also in certain religious invocations
and prayers.  A whole range of nuance expressing social relationships
and attitudes is thereby easy to express in Achumawi and awkward in
English.  But the `objective information' is transmitted in a pretty
obvious way, even in English.  Sometimes, the `objective information'
transmitted is that the speaker is referring to a pair; sometimes, the
`objective information' is that the speaker affirms a certain special
deference with respect to the intended audience.  Irony and the like can
complicate this further.  In each case, the dual suffix is an obligatory
choice given presence of certain explicit constructions; and given the
presence of the dual suffix those constructions need no longer be
explicitly said and are instead tacitly understood.

Honorifics in Japanese present a rich field for issues of this sort.
Indeed, every language abounds with reductions of explicit constructions
to concise, nuance-laden forms.

Translating from a nuance-laden reduced form to an explicit,
spelled-out, fully explanatory form always loses the impact that the
reduced form has on a native speaker.  This is closely analogous to
translating a joke, which is why `getting' native humor is such an excellent
test of fluency.  (For many years, anthropologists speculated whether or not
American Indians joked!  My experience suggests they probably were the
frequent butts of deadpan setups and put-ons.)

Ross recently proposed differentiating languages along a McLuhanesque
`hot/cool' spectrum according to how easy or difficult it is to recover
tacit information from under pronominal references; an article in the
last issue of Linguistic Inquiry reviews and extends this work.  (Sorry,
I don't have either reference at hand.)

        Bruce Nevin, bn@bbncch

------------------------------

Date: Tue 27 Nov 84 11:10:47-PST
From: ISRAEL@SRI-AI.ARPA
Subject: Conference - Theoretical Approaches to Natural Language Understanding

                       CALL FOR PAPERS

                        WORKSHOP ON

                   Theoretical Approaches to
                Natural Language Understanding

                    Dalhousie University
                    Halifax, Nova Scotia
                    28-30 May, 1985

General Chairperson: Richard Rosenberg, Mathematics Department,
Dalhousie University, Halifax, N.S. B3H 4H8

Program Chairperson: Nick Cercone, Computing Science Dept., Simon
Fraser University, Burnaby, B.C. V5A 1S6

Theoretical Approaches to Natural Language Understanding is intended
to bring together active researchers in Computational Linguistics,
Artificial Intelligence, Linguistics, Philosophy, and Cognitive
Science to discuss/hear invited talks, papers, and positions relating
to some of the 'hot' issues regarding the current state of natural
language understanding.  The three areas chosen for discussion are
aspects of grammars, aspects of semantics/pragmatics, and knowledge
representation.  In each of these, current methodologies will be
considered: for grammars - theoretical developments, especially
generalized phrase structure grammars and logic-based meta-grammars;
for semantics - situation semantics and Montague semantics; for
knowledge representation - logical systems and special purpose
inference systems.

Papers are solicited on topics in any of the areas mentioned above.
You are invited to submit four copies of a paper (double-spaced,
maximum 4000 words) to the program chairman: Nick Cercone, before 12
January, 1985.  Authors will be notified of acceptances by 27
February.  Accepted papers, typed on special forms, will be due 30
March 1985 and should be sent to the program chairman.  To make
refereeing possible, it is important that the abstract summarize the
novel ideas, contain enough information about the scope of the work,
and include comparisons to the relevant literature.  Accepted papers
will appear in the Proceedings; those papers so recommended by the
reviewers will be considered for inclusion in a special issue of
Computational Intelligence, an international Artificial Intelligence
journal published by the National Research Council of Canada.
Presentation of papers at the Workshop will be at the discretion of
the program/organizing committee in order to maintain the focus and
workshop flavor of this meeting.  Information concerning local
arrangements will be available from the general chairman: Richard
Rosenberg.  Proceedings will be distributed at the workshop and
subsequently available for purchase.

------------------------------

End of AIList Digest
********************

∂08-Dec-84  0032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #173    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Dec 84  00:31:45 PST
Date: Fri  7 Dec 1984 22:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #173
To: AIList@SRI-AI


AIList Digest            Saturday, 8 Dec 1984     Volume 2 : Issue 173

Today's Topics:
  Journals - LISP Papers & Computational Intelligence,
  Brain Theory - Caenorhabditis Elegans,
  Cognition - Infant Amnesia & PBS and The Brain,
  Seminars - Speech & Language & Memory & Math Representation  (CSLI),
  Conference - Intelligent Systems and Machines
----------------------------------------------------------------------

Date: Fri 7 Dec 84 15:16:31-PST
From: Michael Georgeff <georgeff@SRI-AI.ARPA>
Subject: Journals for LISP papers


I wish to submit a paper on a new and efficient method for implementing
funargs in LISP (currently an SRI Tech Note) to an appropriate
journal.  Anyone know of any GOOD journal that publishes papers
on programming languages and implementations??

Michael Georgeff.

------------------------------

Date: Thu 6 Dec 84 16:58:08-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Computational Intelligence--New Journal

Computational Intelligence/Intelligence Informatique is a new journal which
will publish in English/French high-quality original theoretical or
experimental research in computational (artificial) intelligence.  The
editors are Nick Cercone/Simon Fraser University and Gordon McCalla/University
of Saskatchewan.  The editorial board includes L. Bolc, A. Mackworth, A. Ortony,
R. Perrault, E. Sandewall, A. Sloman, N. Sridharan, D. Wilkins, etc.
Subscription rates: U.S. $85 institutional, $47 personal.  It will be
a quarterly with the first issue to be available February 1985.
It will be published by the National Research Council of Canada and
sponsored by the Canadian Society for Computational Studies of Intelligence.
For more information: Distribution, R-88 (Computational Intelligence),
National Research Council of Canada, Ottawa, Ontario, Canada, K1A 0R6.
Special rates for members of Canadian Societies.  Manuscripts should be
addressed to the editors, Computational Intelligence, Computing Science
Department, Simon Fraser University, Burnaby, British Columbia, Canada,
V5A 1S6.

I will be ordering this title for the Math/CS Library.  [...]

Harry Llull

------------------------------

Date: Thu 6 Dec 84 12:01:07-CST
From: ICS.DEKEN@UTEXAS-20.ARPA
Subject: brains, kludges, and elegance

The most substantial evidence of brain kludgery or lack thereof
curiously resides in the structure (completely mapped) of a nematode
named ... "elegant."

Caenorhabditis elegans has 302 neurons of 118 different types, which
make about 8000 synapses in total (each process synapses with about
50% of its neighbors).  The lineage of every one of these cells is
known, and the process by which neurological structures are formed may
well seem, to a computer scientist, a kludge.  Bilateral symmetry is
not produced, for example, in the "logical" mirror-image development
of a single precursor.  The word "kludge," though, carries a pejorative
connotation which seems inappropriate - there are multiple forces and
priorities at work. (One might similarly feel that "kludge" is not
the right word to describe democracy relative to totalitarianism.)

A better word, which may mean something to biologists and others
outside the hacker's ken, might be "fossiliferous," used to describe
any system (program or biological organism) which carries along the
baggage of its own trial-and-error evolution.  As Sulston, White,
Thomson, and Schierenberg put it:

        "... the perverse assignments, the cell deaths, the long-range
        migrations - all the features which could, it seems, be
        eliminated from a more efficient design - are so many
        developmental fossils."

(There is a three-part series on C. elegans in Science of 22 Jun,
6 Jul, and 13 Jul).

------------------------------

Date: Thu, 6 Dec 1984  01:59 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Infant Amnesia   V2 #165

The general evidence for infant memories is pretty poor.  For one
thing, as Ken Laws points out, there is something mighty suspicious
about those handfuls of memories each person claims.  In psychiatry
some of these are called "screen memories".  A very common feature is
to remember some scene, sort of "eidetically" -- but on questioning, the
subject very often sees itself right in the center of the stage!
Since this is impossible, obviously, the conclusion is that the memory
is a construct.

What's worse, with careful questioning, one usually finds that the
memory has indeed been rehearsed, as Ken remarks, perhaps
periodically.  Presumably it has been reconstructed in the process,
too -- and can hardly be called a memory, but rather, an elaborated
theory or fantasy.

Finally, even more careful questioning is revealing: how do you know
that this was when you were three years old?  Oh, I'm sure of it.  It
was the day my dog was run over.  An innocent clue like that most
likely points to an incident of Freudian magnitude: a loss or death,
itself rehearsed perhaps for months, and then, unconsciously, for all
the rest of one's life.

In any case it is silly to haggle over the sharpness of the cutoff of
infantile amnesia.  I like theories like this: our experience is first
encoded in rather stupid ways; a square is seen as a line attached to
another line attached to another line, etc.  Like an early
assembly-language.  Later, a square is represented as "closed path of
equal lines" and, later, orthogonal pairs of parallels, etc. -- going
from Fortrans to Pascals to LOGOs to Smalltalks to who-knows-what.  The
representations and their interpreters grow more sophisticated, and
those first machine-languages of infancy just can't always be
upwards-compatible.  So, even if those early memories were not, in
fact, entirely ever lost, they're doomed to become
unintelligible, eventually.

------------------------------

Date: 1 Dec 1984 20:03:35 EST
From: HARTUNG@USC-ISI.ARPA
Subject: PBS & The Brain

Hello,
   I too have been watching the PBS series on the brain.  And while I find
it to be remarkably up to date, I do have a concern about it.  This is a
concern not just for this series but for many physiological explanations
of experiential phenomena presented to lay audiences.  When statements are
made that such and such an area of the brain is responsible for some known
effect, or that damage to location X results in some new and peculiar
observed behavior, these statements are (I fear) taken in a way they are
not meant to be.
   The lay audience has a different frame of reference than a psychologist or
neurophysiologist.  Scientists studying brain functions view their subjects
as complex models involving the interaction of a variety of known components:
neurotransmitters, ganglia, axon projections, structures, etc.  The majority
of the audience has only limited exposure to these objects and concepts and
not enough time to really develop a similar framework to view all this new
knowledge in.  Instead I believe they do what people usually do when
understanding new material, which is to relate it to what they already know.  What
people already know is that their brain is responsible for their subjective
awareness of the world.  And as a result of the attempt to integrate knowledge
about the brain with the fact that it is the seat of subjective experience
there is a strong possibility that people will believe that these explanations
of brain functioning are in fact explanations of how it is that they have
an experiential component to their lives.
   Such physiological explanations will probably never supply the answer to
the question of how it is that we have the kind of experience of things that
we do.  For a good argument on this point I refer you to Nagel's article
in the Oct. '74 Philosophical Review, "What is it like to be a bat?".  But lay
audiences are rarely if ever informed of this.
   Another point frequently skipped in presenting brain physiology to lay
audiences is the great importance of subjective experience in the functioning
of cognition.  (See Natsoulas, T.  Residual Subjectivity. American Psychologist
March 1978.)  Indeed subjectivity is so inseparable from cognition that it
raises serious questions about the capacity of digital machines to perform
the full range of human abilities, given that such digital machines may not
be able to achieve a subjective perspective (Searle, J. Minds, Brains, and
Programs.  Behavioral and Brain Sciences, Vol. 3, No. 3).  Arguments concerning
the mind-brain problem have even come to doubt that present scientific
approaches to the study of mental phenomena, and of their relationship to
physical phenomena, can have any success (Fodor, J.A. Methodological
solipsism considered as a research strategy in cognitive psychology.
Behavioral and Brain Sciences, Vol. 3, No. 3).
   I assume that the AI-LIST audience is aware of details of these arguments.
The television audience mostly is not.  I understand the reluctance of
television producers to include arguments as abstract and difficult as those
on the mind-brain problem.  Not to mention the fact that certain religious
groups find them upsetting.  However, I feel it is important for us
scientists to encourage the people who consult us about presentations to
lay audiences to provide the broadest possible context for our arguments,
and always to remember who our audience is and how different their
perspective may be.

                                Michael A. Moran
                                Lockheed Advanced Software Laboratory

address HARTUNG@USC-ISI

------------------------------

Date: Wed 5 Dec 84 21:28:17-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Speech & Language & Memory & Math Representation 
         (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                ABSTRACT OF NEXT WEEK'S SEMINAR
      ``A Generalized Framework for Speech Recognition''

This talk will describe a framework for speaker-independent,
large-vocabulary and/or continuous speech recognition being developed at
Schlumberger (Fairchild).  The framework consists of three components:
  1) a finite-state pronunciation network which models relevant
     acoustic-phonetic events in the recognition vocabulary;
  2) a set of generalized acoustic pattern matchers; and
  3) an optimal search strategy based on a dynamic programming algorithm.
The framework is designed to accommodate a variety of (typically disparate)
approaches to the speech recognition problem, including spectral template
matching, acoustic-phonetic feature extraction and lexical pruning based
on broad-category segmentation.  A working system developed within this
framework and tailored to the digits vocabulary will also be described.  The
system achieves high recognition accuracy on a corpus spoken by
approximately 250 talkers from 22 ``dialect groups'' within the continental
United States.
                                                ---Marcia Bush
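
[The three components above can be sketched in miniature.  The following toy
is entirely illustrative -- the pronunciation network, unit names, and scores
are invented, not Schlumberger's -- but it shows how a finite-state network
for one word (here, "one") combines with a dynamic-programming search to find
the best-scoring state sequence given per-frame acoustic match scores:]

```python
# Toy pronunciation network: states are phone-like units; arcs allow
# self-loops (a unit may span several frames) and forward transitions.
network = {
    "w":  ["w", "ah"],
    "ah": ["ah", "n"],
    "n":  ["n"],          # final state for the word "one"
}
start, final = "w", "n"

def best_path(frame_scores):
    """Dynamic-programming (Viterbi-style) search for the best state
    sequence.  frame_scores[t][s] is the log-score that frame t matches
    unit s, as a generalized acoustic pattern matcher might report."""
    best = {start: (frame_scores[0][start], [start])}
    for t in range(1, len(frame_scores)):
        nxt = {}
        for s, (score, path) in best.items():
            for s2 in network[s]:
                cand = score + frame_scores[t][s2]
                if s2 not in nxt or cand > nxt[s2][0]:
                    nxt[s2] = (cand, path + [s2])
        best = nxt
    return best[final]
```

[A real system would carry a network covering the whole recognition
vocabulary, with scores supplied by the pattern matchers; the search itself
has this same shape.]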
                        ←←←←←←←←←←←←

                ABSTRACT OF NEXT WEEK'S COLLOQUIUM
                      ``Data Semantics''

Abstract: There is growing agreement that several semantic
phenomena can only be adequately dealt with in a theory which takes
partiality seriously, a theory of partial objects. There is no agreement
about what these partial objects are; for instance, whether they represent
``pieces of the world'' or ``states of partial information about the world.''
Yet, the choice of the perspective determines in large part the potential
of the theory.  I will discuss various aspects of Data Semantics, a theory
being developed by Frank Veltman and me, which takes the second
perspective as basic: the semantic behavior of several types of expressions
can best be understood if we take them to relate to our lack of information,
and regard them as patterns of how information can grow. I will argue that
problems concerning quantification and equality force us to distinguish
between different kinds of partial objects.
                                                        ---Fred Landman



                    F1 (AND F3) PROJECT MEETING

Title:     Self-propagating Search of Memory
Speaker:   Pentti Kanerva
Time/Date: Tuesday, December 11, 3:15 p.m.
Place:     Ventura Seminar Room

Abstract: Human memory has been compared to a film library that is indexed
by the contents of the film strips stored in it.  How might one construct
a computer memory that would allow the computer (a robot) to recognize
patterns and to recall sequences the way humans do?  The model presented
is a simple generalization of the conventional random-access memory of a
computer.  However, it differs from it in that (1) the address space is very
large (e.g., 1,000-bit addresses), (2) only a small number of physical
locations are needed to realize the memory, (3) a pattern is stored by
adding it into a SET of locations, and (4) a pattern is retrieved by POOLING
the contents of a set of locations.  Patterns (e.g., of 1,000 bits) are
stored in the memory (the memory locations are 1,000 bits wide) and they
are also used to address the memory.  From such a memory it is possible to
retrieve previously stored patterns by approximate retrieval cues--thus,
the memory is sensitive to similarities.  By storing a sequence of patterns
as a linked list, it is possible to index into any part of any "film strip"
and to follow the strip from that point on (recalling a sequence).
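
[For readers who want the flavor of the model, here is a minimal sketch of
such a memory, scaled down to 256-bit patterns and 1,000 physical locations
so it runs quickly; the sizes and activation radius are illustrative choices,
not Kanerva's figures:]

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256       # pattern/address width in bits (the abstract uses 1,000)
M = 1000      # number of physical "hard" locations -- tiny next to 2**N
RADIUS = 118  # Hamming-distance activation radius (illustrative)

# Each hard location has a fixed random address and a vector of counters.
hard_addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)

def activate(address):
    """Indices of hard locations within RADIUS bits of the given address."""
    dists = np.count_nonzero(hard_addresses != address, axis=1)
    return np.nonzero(dists <= RADIUS)[0]

def store(address, pattern):
    """Store by ADDING the pattern (as +1/-1) into a SET of locations."""
    counters[activate(address)] += 2 * pattern.astype(np.int32) - 1

def retrieve(address):
    """Retrieve by POOLING (summing) activated counters, then thresholding."""
    pooled = counters[activate(address)].sum(axis=0)
    return (pooled > 0).astype(np.int8)
```

[Storing a pattern at its own address and retrieving with a slightly
corrupted copy of that address illustrates the sensitivity to similarity:
the corrupted cue still activates largely the same set of locations, so
pooling recovers the stored pattern.]
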
                         ←←←←←←←←←←←←

                       AREA C MEETING

Topic:     Theories of variable types for mathematical practice,
           with computational interpretations
Speaker:   Solomon Feferman, Depts. of Mathematics and Philosophy
Time/Date: 1:30-3:30 p.m., Wednesday, December 12
Place:     Conference Room, Ventura Hall

Abstract:  A new class of formal systems is set up with the following
characteristics:
   1) Significant portions of current mathematical practice (such as in
      algebra and analysis) can be formalized naturally within them.
   2) The systems have standard set-theoretical interpretations.
   3) They also have direct computational interpretations, in which all
      functions are partial recursive.
   4) The proof-theoretical strengths of these systems are surprisingly
      weak (e.g. one is of strength Peano arithmetic).
   Roughly speaking, these are axiomatic theories of partial functions and
classes.  The latter serve as types for elements and functions, but they
may be variable (or "abstract") as well as constant.  In addition, an element
may fall under many types ("polymorphism").  Nevertheless, a form of typed
lambda calculus can be set up to define functions.
   The result 3) gets around some of the problems that have been encountered
in the interpretation of the polymorphic lambda calculus in recent literature
on abstract data types.  Its proof requires a new generalization of the
First Recursion Theorem, which may have independent interest.
   The result 4) is of philosophical interest, since it undermines
arguments for impredicative principles on the grounds of necessity for
mathematics (and, in turn, for physics).
   There are simple extensions of these theories, not meeting condition 2),
in which there is a type of all types, so that operations on types appear
simply as special kinds of functions.



                           NL1 MEETING

Topic:      ``Association with Focus''
Speaker:    Mats Rooth
Time/Date:  2 p.m., Friday, December 7
Place:      Trailer Seminar Room
Note:       The content will overlap with but be non-identical to the
            presentation the speaker gave in the intonation seminar.

Abstract: In the context of adverbs of quantification, conditionals, and
``only,'' focus can have truth conditional significance.  Suppose Mary
introduced Bill and Tom to Sue and performed no other introductions.  Then
``Mary only introduced Bill to SUE'' is true, while ``Mary only introduced
BILL to Sue'' is false.  Similarly, ``MARY always takes Sue to the movies''
and ``Mary always takes SUE to the movies'' have different truth conditions.
My general claim is that focus influences truth conditions indirectly:  the
semantics of the constructions in question involve contextual parameters,
typically unspecified domains of quantification, which are fixed by a
focus-influenced component of meaning.  This idea is executed in a Montague
grammar framework.

------------------------------

Date: Fri, 7 Dec 84 10:36:07 EST
From: Morton A Hirschberg <mort@BRL-BMD.ARPA>
Subject: Conference - Intelligent Systems and Machines


                                CALL FOR PAPERS

                1985 Conference on Intelligent Systems and Machines

Dates:  April 23-24, 1985

Place:  Oakland University
        Rochester, Michigan

Technical papers reflecting both advances and applications in all aspects of
intelligent systems and machines will be considered.  Suggested topics include,
but are not restricted to:

     Intelligent Robotics, Machine Intelligence, C3I, Adaptive Control and
     Estimation, Visual Perception and Computer Vision, Pattern Recognition
     and Image Processing, Artificial Intelligence for Engineering Design,
     Intelligent Simulation Tools, Computer-Integrated Manufacturing Systems,
     Knowledge Representation, Expert Systems, Game Theory and Military
     Strategy, Interpretation of Multisensor Information, Automatic Message
     Understanding, Natural Language and Automatic Programming.

Authors are requested to submit a 300-500 word abstract by January 31, 1985 to:

     Professor Nan K. Loh, Conference Chairman,
     (313)377-2222

     Professor Christian Wagner, Technical Review Committee Chairman
     (313)377-2215

     Center for Robotics and Advanced Automation
     School for Engineering and Computer Science
     Oakland University
     Rochester, Michigan 48063

The conference will be preceded by tutorials on AI and Robotics held 22 April.

------------------------------

End of AIList Digest
********************

∂08-Dec-84  2332	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #174    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Dec 84  23:32:03 PST
Date: Sat  8 Dec 1984 17:25-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #174
To: AIList@SRI-AI.ARPA


AIList Digest             Sunday, 9 Dec 1984      Volume 2 : Issue 174

Today's Topics:
  AI Tools - UNSW Prolog,
  Books - Pitman AI Series,
  Cognition - Childhood Memories,
  Expert Systems - Optical Disk Memories,
  Machine Translation - Folklore,
  Knowledge Representation - Nonverbal Meaning,
  Seminar - Reinforcement Learning  (CMU)
----------------------------------------------------------------------

Date: Thu, 6 Dec 84 13:08:58 PST
From: Adolfo Di-Mare <dimare@UCLA-LOCUS.ARPA>
Subject: UNSW Prolog

    Date: Mon 26 Nov 84 23:21:34-PST
    From: Michael A. Haberler <HABERLER@SU-SIERRA.ARPA>
    Subject: UNSW Prolog interpreter
    To: info-ibmpc@USC-ISIB.ARPA

I have ported the University of New South Wales Prolog interpreter to an
IBM PC running MS-DOS 2.0. It implements all built-in predicates of the
Unix version and can call your favorite editor or the command-line
interpreter.  UNSW Prolog is closely patterned after Prolog-10, but has no
compiler.

I got permission to redistribute the interpreter from the author of
the Unix version, Claude Sammut of UNSW. If you want to obtain a copy,
sign the license which can be FTP'ed from [SIERRA]<HABERLER>PROLOG.LICENSE,
and send the license with 2 DSDD diskettes to the address below. Neither
Claude nor I charge anything for it.

Michael Haberler
Computer Systems Laboratory ERL 403
Stanford University, Stanford CA 94305
(415) 497-9503


        Adolfo
              ///

------------------------------

Date: Fri 7 Dec 84 17:32:57-EST
From: SRIDHARAN@BBNG.ARPA
Subject: Pitman AI series now a concrete reality!


Many of you know that Derek Sleeman and I are the two Main
Editors for the Pitman AI research notes series.  The series was
conceived and developed over the past 18 months.  Some of you also saw
the Pitman booth at the AAAI-84 trade show.
Finally, the series has become a concrete reality.  I have
received the first "book" in the series.  Another six books will be
out within the month.  The first title is Perry Miller, A Critiquing
Approach to Expert Computer Advice: ATTENDING.
The other titles are listed below.
Paul Cohen, Heuristic Reasoning about uncertainty.
A. Palay, Searching with probabilities.
Y. Ohta, Knowledge-based interpretation of outdoor natural color scenes.
R. Korf, Learning to solve problems by searching for macro-operators.
P. Politakis, Empirical Analysis for Expert Systems.
J. Kender, Shape from texture.

The series covers the whole spectrum of AI and publishes research
materials suitable for use in graduate courses and seminars, and as
reference material for individuals working in this field.  The aims
of the series are (a) rapid publication in softback form;
(b) worldwide exposure for significant research results; and
(c) low cost - usually under $20.

Authors are encouraged to get in touch with one of the main editors
either Sridharan@BBNG or Sleeman@SUMEX.  [...]

------------------------------

Date: Fri, 7 Dec 84 21:30:31 est
From: utcsrgv!dciem!mmt@uw-beaver.arpa
Subject: Childhood memories

I have many memories dating back to as early as my 2nd birthday, and
can clearly remember large parts of the floor plan of the school I
attended from 3-5.  But ALL these memories are pictorial, not sequential.
I cannot remember happenings until about 5, when I remember my first
introduction to French verb conjugation.  Perhaps the truth is that
children are not capable of sequential logical operations until around 5,
and therefore cannot remember events of that kind, whereas pictures
are more readily preserved if you happen to grow up to be imagery-oriented.


Martin Taylor

------------------------------

Date: Thu 6 Dec 84 19:19:58-EST
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Personal Assistants & Optical Disks

     Re: Dietz's speculation about optical disks:

     Optical disks will clearly impact information technology in
general (microforms, magnetic tape, commercial databases, book
publishing, etc.) and microcomputers in particular in many
revolutionary ways.  One potential use would be to integrate the
optical disk with AI-based integrated software in a microcomputer
product which would be a powerful general purpose idea processor and
personal assistant.

     We already see a trend towards general purpose idea processors in
such micro products as Framework, Symphony, Thinktank, Clout, Dayflo,
Factfinder, and The Desk Organizer.  This trend is likely to continue
and to accelerate as new generations of microprocessors rapidly come
online and make available ever greater random access memory for
personal computer users.  Framework and Symphony are the crude
precursors of general-purpose personal assistant programs occupying
1MB, 5MB, and more of memory.

     A sign of the times: Mitch Kapor, the founder of Lotus, recently
commented in an MIS Week interview that the next key step for his
company would be to explore current AI research in depth, and to
develop new more powerful products that were capable of sophisticated
qualitative, not just quantitative, information processing.

     Optical disks would nicely interface with the next generation of
general purpose idea processors.  With them one could easily store,
retrieve, and manipulate all the vital information and minute details
in one's life: financial transactions, notes for miscellaneous
projects, diary entries, address books, medical records, rough drafts,
datebooks, electronic mail, shopping lists, statistics, papers,
bibliographies, administrivia, programs in progress, graphs, abstracts
and full-text documents downloaded from commercial databases, etc.
Every individual record or key chunk of information in one's personal
digital archive could be uniquely identified by a date and time stamp,
and every personal database, structured and/or free-form, could be
integrated into a single richly interconnected knowledgebase.  The set
of storage optical disks for a program of this kind would constitute
for anyone, in compact and efficient form, an extremely thorough
journal of his or her life.

     Write-once optical disks would actually be preferable for this
archival purpose to disks which could be erased and written over.
Subsets from the master archival disk(s), of any desired information
or complex combination of records, could be transferred at will to
working floppy or hard disks.  The technology for the greatest
revolution in the history of personal information management is
already solidly in place.

     It is not likely that the total information processed by a
personal assistant for an average person over a lifetime would occupy
more than one or two disks.  Even for someone whose personal
information needs were much greater than average--say, a Harvard
economics professor who is a dedicated teacher, a prolific scholar and
author, holds a cabinet-level post (not concurrently with his teaching
responsibilities, of course), and has an active globe-trotting social
life--under 100 disks would probably neatly archive a lifetime of rich
intellectual, professional, and social activity.  Our professor would
be able to pinpoint in a few minutes those two sentences in which x
remarked about y in a private communication twenty years ago, or that
small note of last year which captured a flash of insight about how to
improve a formula in an econometric model of the Venezuelan oil
industry.  (Literary scholars analyzing the biodisks of future Walt
Whitmans or Virginia Woolfs would be able to reconstruct in
microscopic detail the evolution of their subjects' works and themes,
and the interaction of quotidian life events with their imaginative
creations.)
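
     [The record-keeping scheme described here reduces, at its core, to an
append-only store of time-stamped records with free-text retrieval.  A
deliberately naive sketch -- hypothetical, implying no real product -- might
look like this:]

```python
import datetime as dt

archive = []   # append-only, like a write-once optical disk

def record(stamp, kind, text):
    """File a record under its unique date/time stamp."""
    archive.append({"stamp": stamp, "kind": kind, "text": text})

def pinpoint(*words):
    """Every record mentioning all the given words, oldest first."""
    hits = [r for r in archive
            if all(w.lower() in r["text"].lower() for w in words)]
    return sorted(hits, key=lambda r: r["stamp"])
```

     [With records accumulated over decades, a query such as
pinpoint("Venezuelan", "oil") would surface the one small note in question;
the real engineering lies in storage density and indexing, not in the logic.]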

     AI-based personal assistants and optical disks seem to be made
for one another.  I wouldn't be surprised to see prototype products on
the market within the next two years.  By the '90s we may well wonder
how we ever got by without them.

-- Wayne McGuire (mdc.wayne@mit-oz)

------------------------------

Date: 7 Dec 84 16:22:46 EST
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: more on translation...


I understand that a similar attempt with a Chinese/English translator
yielded the following results:

English input:  "Out of sight, out of mind"

Translated response: "Blind and Stupid"

I did have the occasion to "speak" with a Japanese student using a Sharp(?)
hand-held translator.  Surprisingly, general ideas were conveyed quite well.
However, I think we're still a long way off from getting a computer to
translate a language any better than an eight year old bilingual person can.

                                                -Allen

------------------------------

Date: Fri, 7 Dec 84 09:58:10 pst
From: Douglas young <young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: Nonverbal meaning

Following my enquiry in AIList 62, a few people have asked what I
mean by "nonverbal meaning". It seems appropriate to reply to them,
and to explain to any others who may not understand the significance
of the term, through the medium of AIList.
   While until quite recently Wittgenstein, Frege, Quine, and Chomsky
might have seemed nearer than any other philosophers of language (or
anyone else, for that matter) to providing a firm foundation from
which to represent meaning, none has been willing to go systematically
"deeper" than using words, ultimately, as that foundation. They have
written only of very unspecific and vague concepts and structures.
But Jerry Fodor's recent and exciting book, "The Modularity of Mind",
made, in my view, a major leap ahead in at least recognising that meaning
is founded upon nonverbal, cognitive modules, although he suggested
neither the exact form that such modules might take nor just how they
could be applied to providing nonverbal meaning.
   We have been working here for several years on the theory and
foundations of a system by which word and sentence meaning could be
represented nonverbally in a natural language understanding system. The
principles of this system arose from clues derived from some
neurophysiological experiments I conducted during 1976-78 (in which
recordings were taken from the pulvinar complex, a part of the brain
that in man is involved in language but that also exists as far back
in the phylogenetic tree as the rabbit). During the following six
years, further neurological
and psychological research provided the detailed foundations of a system
by which we could represent the meaning(s) of any word or sentence, in
English (but that is essentially transportable to any other major
natural language), wholly nonverbally.
   Some of the neurological and psychological grounds for both the
semantic and the syntactic base of the system were described in two
papers published in Medical Hypotheses in 1982 and 1983; but, as I
mentioned in my previous communication, the original systems of
modalities and coding described in those papers were superseded so
long ago that they (but not the grounds) are of little significance
now. We are currently in the early stages of designing the software
for a prototype of the modal system, and some of the results of this
work should be published during the latter part of 1985.
  In order to explain as concisely as possible the principles and some of
the techniques employed, it may be helpful to take people back to basics:
Try to explain by words alone the meaning of any one of a range of
different words (e.g., MUG, DIFFERENTIATE, WALK, OR, INTERNATIONAL,
QUICKLY).  You will succeed in providing several sets and trees of
dictionary-type definitions; but, in the end, if you continue to ask
yourself the meaning
of each new word in each succeeding set of definitions, you will either
get into an endless cycle of using the same words with which you began
your definitions, or you will reach an impasse. If, however, you then
ask yourself, and consider carefully, the subjectively experienced
nonverbal significances of those same words, several ideas will come to
mind. For example, in respect of MUG, you will likely notice the fact
that it has both aspects of "appearance" (such as its visual shape, or
the interorientation of its parts to one another) and of "function" (such
as the motor and kinaesthetic sequences of events that enable you to
drink from a mug).  The same kind of thoughts may also occur to you when
you consider a word like QUICKLY or UP, for example. Abstract words, like
the verb MATCH, and "long" words, like INTERNATIONAL, will require some
or many levels of verbal "unfolding" of their meanings in order for you
to be able to reach any of their nonverbal foundations; but these words
also can be nonverbally represented, by means of the "mental modalities".
In fact, all of these nonverbal aspects of meaning can be represented
by means of a whole range of modalities.
    The system incorporates 32 different modalities, of which 27 are
neurologically based (such as visual detection of movement (VDM), verbal
expression (VXP), kinaesthetic (KIN), central autonomic proprioception
and control (CAP)), and 5 are the mental modalities, for which there are
no neurological, only cognitive, grounds (such as cognitive mental acts
(CMA), metaconceptual (MET), emotive mental states (EMS)). Codes within
these modalities, grouped together as a frame of generic parts of function
and/or appearance, and closely interrelated, can provide a nonverbal
meaning representation for any word. The meaning of a sentence is provided
through an interactive syntactic process that, both anteroactively and
retroactively, interrelates appropriate segments of those modal code
frames, so as to disambiguate both the individual word meanings and their
"use-categories" (i.e., "object", "activity", "characteristic", or "relation").
By this method, it is possible to represent the meaning(s) of any sentence
nonverbally, and at the same time provide access, up to any depth required
of a particular system application, to direct and associated knowledge
regarding that sentence.
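
   A toy sketch may make the frame idea concrete. The modality
abbreviations below (KIN, VDM) are taken from the description above, but
the frame layout, the codes, and the helper function are entirely
hypothetical illustrations, since no concrete encoding has been given:

```python
# Hypothetical sketch only: the modality names come from the text, but
# the codes ("cylinder+handle", etc.) are invented placeholders.
MUG = {
    "appearance": {                 # how the object looks
        "VSH": "cylinder+handle",   # visual shape (invented code)
        "INT": "handle-on-side",    # interorientation of parts
    },
    "function": {                   # what one does with it
        "KIN": "grasp-lift-tilt",   # kinaesthetic sequence
        "VDM": "liquid-toward-mouth",
    },
}

def shares_aspect(frame_a, frame_b, group):
    """Cognitive-equivalence check: do two word frames share any
    modal code within a given group (e.g. 'function')?"""
    a, b = frame_a.get(group, {}), frame_b.get(group, {})
    return any(a.get(k) == b.get(k) for k in a)

CUP = {"function": {"KIN": "grasp-lift-tilt"}}
print(shares_aspect(MUG, CUP, "function"))    # -> True
print(shares_aspect(MUG, CUP, "appearance"))  # -> False
```

Shared codes of this kind are what would let such a system store one
modal aspect once for many similar objects.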
   The modal system seems both versatile and quite powerful, and it has
the advantage over some other systems of NLU that it reduces memory and
storage requirements by taking advantage of the many cognitively equivalent
modal aspects in descriptions of similar objects, activities or
characteristics. One rather satisfying aspect of the mental modalities
is that the cognitive mental act modality not only provides for the
nonverbal meanings of such words as ASSOCIATE, NEGATE, SYMBOLIZE, MATCH
or CONJUNCT, but also provides the means of executing the relevant logical
activity. Incidentally, another feature of the system is that it can
provide for both metaphor and idiom; but work on this will almost
certainly be delayed until 1986 due to the need to complete the basic
system software for the prototype.
  It would be inappropriate in the AIList to do more than try to
provide, with sufficient background, an idea of the general
characteristics of the system. I hope, however, that what I have
written will be sufficient to explain at least what sort of thing I am
referring to by "nonverbal meaning".
As mentioned in AIList 62, I would be most interested to hear about,
and/or to receive copies of any papers from, other projects in this or
any allied area of natural language understanding.

      Douglas A. Young
      Dept of Computer Science
      University of Manitoba
      Winnipeg
      Manitoba, R3T 2N2
      CANADA

------------------------------

Date: 7 Dec 84 11:48:09 EST
From: Steven.Shafer@CMU-CS-IUS
Subject: Seminar - Reinforcement Learning  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Richard Sutton, from U. Mass., will be speaking at next Tuesday's AI
Seminar.  WeH 5409 at 3:30 pm.  If you'd like to speak with him
during his visit, please contact Geoff Hinton.

REINFORCEMENT LEARNING:  LEARNING METHODS FOR COMPLEX SYSTEMS

   Reinforcement learning is the process of learning to make decisions
based on the observed results of previous decisions.  It is
distinguished from other forms of machine learning in that it does not
require instruction as to what the learning system should do, only
evaluation of what it does do.  In this sense reinforcement learning
requires less help from its environment and is more powerful and robust
than other forms of learning.  In complex learning systems it is
particularly difficult to specify in detail what the learning system
should do, and reinforcement learning is particularly relevant.
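
[The evaluation-versus-instruction distinction can be illustrated with
a generic two-armed-bandit sketch; this is not one of the algorithms
compared in the talk, and all names in it are invented:

```python
import random

def bandit_learner(payoff, steps=2000, epsilon=0.1, seed=0):
    """Learn the better of two actions given only an evaluation
    (a reward for the action actually taken), never an instruction
    saying which action is correct."""
    rng = random.Random(seed)
    value = [0.0, 0.0]   # running reward estimate per action
    count = [0, 0]
    for _ in range(steps):
        # mostly exploit the current best estimate, occasionally explore
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = value.index(max(value))
        r = payoff(a)                 # the environment only evaluates the choice
        count[a] += 1
        value[a] += (r - value[a]) / count[a]   # incremental mean
    return value.index(max(value))

# Action 1 always pays 1.0, action 0 pays nothing; the learner finds
# action 1 without ever being told which action is "right".
print(bandit_learner(lambda a: float(a)))  # -> 1
```

A supervised learner would instead be told "choose action 1" outright;
the reinforcement learner must discover it from rewards alone.  -- KIL]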

   Nevertheless, reinforcement learning has been studied very little.
This talk will present computational experiments comparing the
performance of many previously-studied algorithms and several new
ones.  In many cases the previously proposed algorithms were found to
perform very poorly, much worse than the new algorithms.  Since in many
cases the new algorithms are only slightly different from the old,
these results suggest that the space of possible reinforcement learning
algorithms is mostly unexplored.  Among the previously-studied
algorithms compared are those due to Minsky, Rosenblatt, Farley and
Clark, Widrow, Samuel, and Michie and Chambers.  The most sophisticated
of the new algorithms appears to be a refinement and generalization of
the algorithm used in Samuel's celebrated checker-player to modify and
improve its static evaluation function.

   This talk will emphasize (1) the difference between reinforcement
learning and other basic forms of learning which have already been
thoroughly studied, (2) the demonstration of improvement over
previously-studied methods, and (3) areas of possible application of
reinforcement learning methods.

------------------------------

End of AIList Digest
********************

∂11-Dec-84  1203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #175    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Dec 84  12:03:35 PST
Date: Tue 11 Dec 1984 09:54-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #175
To: AIList@SRI-AI.ARPA


AIList Digest            Tuesday, 11 Dec 1984     Volume 2 : Issue 175

Today's Topics:
  AI Tools - Tapes on LM & XLISP Availability,
  AI News & Expert Systems - Recent Articles & Machine Poker,
  AI Tools - Parallel Processing and OPS5,
  Humor - Lardware & History of Computing Qual,
  Seminars - Connection Language for Parallel Computers  (MIT) &
    Instructionless Learning  (CMU)
  Course - Sets and Processes  (SU)
----------------------------------------------------------------------

Date: 10 Dec 1984 at 1125-EST
From: jim at TYCHO.ARPA  (James B. Houser)
Subject: Tapes on LM

Hi
        We just got a new  "industry  standard"  9-track  tape  drive  from
Symbolics  for our 36??.  Has anyone worked out how to convert tape formats
so you can interchange with other processors?  We are especially interested
in LMI and UNIX.

                                Cheers

                                        Jim

------------------------------

Date: Mon, 10 Dec 84  3:52:54 EST
From: "Martin R. Lyons" <991@NJIT-EIES.MAILNET>
Subject: XLISP availability


     Does anyone have the C source of XLISP lying around?  The copy
that was forwarded to us was destroyed in a system crash.  I believe
it was version 1.2, but this is a best guess.

     If anyone has any information regarding other public domain LISPs
written in C, I would appreciate pointers as to whom to contact to get
a copy.

     As always, thanks in advance...

 MAILNET: Marty@NJIT-EIES.Mailnet
 ARPA:    Marty%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA
 USPS:    Marty Lyons, CCCC/EIES @ New Jersey Institute of Technology,
          323 High St., Newark, NJ 07102    (201) 596-2932
 "You're in the fast lane....so go fast."

------------------------------

Date: Sat, 8 Dec 84 06:42:05 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: AI News


Datamation December 1 1984 Page 172
Ovum Ltd. announces their report "The Commercial Application of Expert
Systems Technology."  It costs $395 and is available from Ovum Ltd.,
14 Penn Rd. London N7 9RD, England (including air mail).


Byte, December 1984
Page 412 - Ad: Walt Lisp for CP/M for $169.00.  It is substantially
compatible with Franz Lisp and similar to MacLisp.
1-800-LIP-4000 from ProCode International, 15930 SW Colony Pl.,
Portland, OR 97224

Page 355: Review of micro-Prolog: Available from
Programming Logic Systems 31 Crescent Drive, Milford, CT 06460


Electronic News, December 3 1984
Page E
Symbolics has signed a contract valued at more than $3,000,000 to
supply 50 3600 Series Lisp Machines to Carnegie Group Inc.

Page 44
Announcement of Inforite Tablet which recognizes hand printed characters,
graphics and sketches.

------------------------------

Date: Fri 7 Dec 84 17:49:26-EST
From: SRIDHARAN@BBNG.ARPA
Subject: Excerpt from "games" mag

From the Jan 85 issue of GAMES, p. 6-7:
"How do you beat a poker player blessed with the supreme poker face?
That's one of the problems that will confront the winner of a $100,000
poker tournament to be held this month at the Bicycle Club in Bell Gardens,
California.

Whoever takes the event's high-draw competition must face a poker-playing
computer named ORAC in a head-to-head, no-limit game of draw poker.  ORAC
was developed by Mike Caro [Why is the program called ORAC?.. nss]
a top Las Vegas poker pro and computer whiz.  Not only is ORAC programmed
to beat people, it is also capable of explaining in English the strategy
used.

ORAC has not had an easy life thus far.  Its first trial by fire was last
April at the 1984 World Series of Poker in Las Vegas, where it played a
heads-up game against the then reigning world champion of poker, Tom
McEvoy. Though ORAC normally generates its own cards, a human dealer
was used at the World Series to allay any suspicion of cheating.  The
computer read its hand with a special optical scanner similar to the ones
used in supermarket checkout counters.

Man and machine played just about dead even for three-quarters of an hour
until ORAC moved all its chips in with an ace-queen of diamonds against
McEvoy's ace-nine off suit.  (The game was hold'em, a variation of
seven-card stud).  McEvoy held by far the worst hand, but he was lucky
enough to draw a pair of 9's and claim victory.  Commented the world champ:
"The fact that the computer went in with the best hand and got drawn
out on proves it's only human" [Hmm.!]

... The computer has proved itself a world-class competitor.  As for
the upcoming match at the Bicycle Club, Caro is full of confidence:
"ORAC will not only win," he says sanguinely, "but immediately afterwards,
it will write its own press release, explaining its actions during the match."

------------------------------

Date: 11 December 1984 0140-EST
From: Joseph Kownacki@CMU-CS-A
Subject: Parallel Processing and OPS5

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

OPS3/CM* is a facility on CM* which can execute OPS3/OPS5 programs in
parallel. The current version is not a complete system, but it is capable
of executing a representative subset of the TicTacToe program in parallel.

This post is a request for OPS5 test programs, especially those of moderate
size, which demonstrate (or counter-demonstrate) the usefulness of parallel
processing in this application.  A complex version of TTT or 8-puzzle
would be immediately handy.

Any assistance or suggestions in obtaining such examples would be
greatly appreciated.  Please also forward this message to any other
people or groups that might be able to help.

You can obtain background information on OPS3/CM* from my Plan file -
just finger J. Kownacki.

------------------------------

Date: Fri, 7 Dec 84 13:12:29 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Special purpose hardware

There are still some open questions regarding the optimality of
Buell's sorting malgorithm (generate all N! possible permutations of the
N items to be sorted and then test each permutation to see if it is the
sorted result).  Nevertheless, the malgorithm does offer some interesting
properties when one considers the possibility of using an array of
parallel processors to implement the malgorithm.  One can show that
an array of N numbers can be sorted in constant time by an N by N!
array of processors and a data memory of the same size plus an
auxiliary memory that consists of one bit per processor.

We divide the set of processors into N! one-dimensional arrays of
N elements. Each of these arrays is responsible for generating and
testing one of the N! permutations of the items to be sorted.

In the first step, each of the N processors in each array loads one
of the items to be sorted and stores it at a predetermined location
to generate the permutations.  In the second step, each processor
compares two neighboring items in its permutation and sets a bit
in the auxiliary memory if the items are in the proper order.  Finally,
one of the processors in each array examines its set of N bits in
the auxiliary memory to determine which of the permutations is
in the proper order.
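
Simulated sequentially (one "row" of processors at a time rather than
all N! rows at once), the three steps amount to this minimal Python
sketch:

```python
from itertools import permutations

def malgorithm_sort(items):
    """Simulate the N x N! processor array one row at a time:
    step 1 generates a permutation, step 2 records one bit per
    neighbor comparison, step 3 accepts the all-ordered row."""
    for perm in permutations(items):            # step 1: one row per permutation
        bits = [perm[i] <= perm[i + 1]          # step 2: neighbor comparisons
                for i in range(len(perm) - 1)]
        if all(bits):                           # step 3: every bit set?
            return list(perm)

print(malgorithm_sort([3, 1, 2]))  # -> [1, 2, 3]
```

The sequential simulation of course takes O(N * N!) work; only the
hypothetical hardware achieves the constant-time claim.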

A nice feature of this architecture is that it readily extends
to support descendants of the malgorithm, such as that recently
suggested by Lee and Brownson.

Question: if bad algorithms are called malgorithms, what should we call
architectures designed to implement malgorithms?  Cross suggests lardware.

------------------------------

Date: 10 Dec 84 20:06:06 EST
From: Ed.Frank@CMU-CS-UNH
Subject: History of Computing Qual

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

It's clear from an earlier post [on the CMU bboard, asking about
memory cores] that there is a need in the department for a qual on
the history of various aspects of computing: anachronisms, trivia, etc.
Such a qual will be given on Black Friday, at 3pm in the lounge.
Following standard practice, the syllabus will not be available until
next semester, the qual will not be pretested, and many of the
questions will be unclear.  This qual will cover all four areas.  Send
questions for inclusion in the qual to me and I'll forward them to the
History of Computing Qual committee. Anyone interested in being on
the qual committee should also send me mail.

Some sample questions (Please don't send me the answers to these
questions. Just send me more questions.):

Computing Systems:
What's a drum card?

Programming Systems:
Describe a technique for getting a computer into an infinite
loop without ever executing a branch instruction. Name a machine
with this feature.

Theory:
Describe the fundamental difference between Eniac and the Manchester
Mark I.

AI:
What do CAR and CDR mean? On what machine?

------------------------------

Date: 7 Dec 1984  16:35 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Connection Language for Parallel Computers  (MIT)


             [Forwarded from the MIT bboard by SASW@MIT-MC.]

               === === === AI REVOLVING SEMINAR === === ===

                               Alan Bawden

                        A Programming Language for
                       Massively Parallel Computers

                                    or

                      Help Stamp Out "Pointerthink"!


        Wednesday, December 12, 1984    4:00pm  8th floor playroom

The notion of a "pointer" is built deeply into many modern programming
languages.  Pointers are routinely used as the cement to build complex data
structures, even where other mechanisms would suffice, because on
conventional sequential computers they are cheap and their hazards are easy
to control.  Unfortunately the pointer is expensive and clumsy to support
on a massively parallel computer.  The notion of a "connection" will be
offered as a suitable substitute for the pointer.  Connections are a
minimal mechanism to allow communication; they are more constrained than
pointers and are less of a hazard in a parallel environment.  Most uses of
pointers are trivial enough that connections can be used instead.  This
makes it feasible to construct a programming language using connections,
instead of pointers, as the primitive cement for building data structures.

There are many consequences of making the switch from pointers to
connections.  Due to the symmetry of the connection mechanism, the concepts
of "object" and "type" become exact duals of the concepts of "message" and
"operation".  The notion of "state" emerges not as an aspect of objects,
but as an aspect of the interface between processes.  The problems of
method inheritance in a Flavor-like system are revealed to be even nastier
than previously suspected.  The "futures" mechanism, popular among parallel
programming languages, emerges as a natural consequence of the connection
mechanism.

------------------------------

Date: 8 December 1984 2230-EST
From: Jeff Shrager@CMU-CS-A
Subject: Seminar - Instructionless Learning  (CMU)

             [Forwarded from the CMU bboard by Laws@SRI-AI.]

                        Instructionless Learning
                  A Proposal for Dissertation Research

                              Jeff Shrager

                        Department of Psychology
                       Carnegie-Mellon University

        On: Friday December 14
        At: 10:30am-Noon
        In: Baker Hall 336B

We investigate the mechanisms of instructionless learning by asking
undergraduates to "figure out" a programmable toy, without instructions or
advice. From protocols, we obtain learners' hypotheses and the behaviors
they exhibit that lead to learning a schema for the device.  Behaviors
include performing hypothesis-testing experiments, exploring various
aspects of the device and the incomplete schema, and solving problems to
exercise the schema. The present proposal is to construct and
validate a theory of instructionless learning of the BigTrak.  The theory
includes mechanisms of hypothesis formation, experimental test construction,
and overall learning control.  This work advances theories of concept
learning in complex realistic domains; mental models of complex systems, in
particular their acquisition; and cognitive modelling and its validation.

[Copies of the proposal are available in the Psych Lounge.]

------------------------------

Date: 07 Dec 84  0845 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Course - Sets and Processes  (SU)

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]



                      SETS AND PROCESSES


             MATH 294 (PHIL 394) WINTER QUARTER.
                       COURSE ANNOUNCEMENT

             provisional time: Fridays, 1:15-3:15.

The standard universe of well-founded sets can be completed in a
natural way so as to incorporate every possible non-well-founded set.
The new completed  universe will still model all the axioms of set
theory except that the foundation axiom must be replaced by an
anti-foundation axiom.  The first part of the course will be concerned
with this new axiom, its model and its consequences. Several
interesting variants of the axiom will also be examined.
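
For readers who have not seen it, the anti-foundation axiom in question
is usually stated in terms of graph decorations (this is Aczel's
formulation; the course may treat variants):

```latex
% A decoration of a directed graph $G$ assigns a set $d(n)$ to each node
% $n$ so that
%   d(n) = \{\, d(m) : n \to m \text{ is an edge of } G \,\}.
\textbf{AFA:}\quad \text{Every graph has a unique decoration.}
% For example, the single node with a self-loop is decorated only by the
% set $\Omega$ satisfying $\Omega = \{\Omega\}$, the canonical
% non-well-founded set.
```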

The second part of the course will be concerned with an axiomatic
approach to a general notion of abstract sequential process.  These
processes are capable of interacting with each other so that a variety
of operations for their parallel composition will be available.  The
notion is intended to form the foundation for an approach to the
semantics of programming languages involving concurrency.  A model for
the axiom system can be extracted from recent work of Robin Milner.
But by using the anti-foundation axiom a simple purely set theoretic
model will be given.

Some familiarity with the axiomatic theory of sets and classes will be
presupposed.  An understanding of the notion of a class model of ZFC
will be needed.  Definition by recursion on a well-founded relation
and Mostowski's collapsing lemma will be relevant.  But topics such as
the constructible universe, forcing or large cardinals will NOT be
needed. Some familiarity with computation theory would be useful.

Underlying the model constructions in both parts of the course is a
general result whose appreciation will require some familiarity with
the elements of universal algebra and category theory.

Background references will be available at the start of the course.

Auditors are very welcome.  The course may be of interest to both
mathematicians and computer scientists.


                                           PETER ACZEL

------------------------------

End of AIList Digest
********************

∂13-Dec-84  1448	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #176    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Dec 84  14:48:22 PST
Date: Thu 13 Dec 1984 12:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #176
To: AIList@SRI-AI.ARPA


AIList Digest           Thursday, 13 Dec 1984     Volume 2 : Issue 176

Today's Topics:
  AI Companies - Survey,
  Machine Translation - Folklore & Aymara',
  Linguistics - Language Deficiencies,
  Humor - Nondeficient Christmas Tidings,
  Conferences - Machine Translation & JASIS Call for Papers
----------------------------------------------------------------------

Date: Thursday, 13 December 1984 00:24:23 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Information about AI companies

I am trying to put together a survey of various tools available in the
market for AI work. In particular I am interested in an assessment of the
tools (user experiences). Also would appreciate any information about the
kind of systems that AI companies are building.

sriram@cmu-ri-cive.arpa

------------------------------

Date: 10 Dec 84 11:37 EST
From: Gocek.henr@XEROX.ARPA
Subject: Re: Automatic Chinese translation

I remember the story about "Out of sight, out of mind" differently.  The
phrase was translated into Chinese and then retranslated into English.
The result was "invisible idiot".  Again, the person requesting the
translation was a government official.

Gary Gocek (Gocek.Henr@Xerox.ARPA)


[I first heard it as "blind idiot".  -- KIL]

------------------------------

Date: Sun, 9 Dec 1984  16:14 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Translation Folklore   V2 #174

I'm getting sick of hearing those two stories: "Blind and Stupid,"
and "The drinks were good but the meat was rotten."


It is time for you computer people to start being serious!  Those
stories are only folklore, and did not come from the
machine-translation milieu at all; they circulated long before
computers, and were invented by cynics to make fun of bad human
translators!

If you think about it for a minute, you will realize that none of the
old translating machines were even nearly subtle enough to make such
coherent mistakes!  Modern ones are only a little better, and probably
not quite up to that standard yet.

Has anyone heard of a genuine translation blunder by a working
translation machine -- that is, one which is bad enough to be
considered really funny?  I consider a few of the paraphrases
produced by FRUMP to be in that class.

------------------------------

Date: Tue 11 Dec 84 09:46:43-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Aymara'

I just ran across an article in the S.F. Sunday Examiner and Chronicle
by Peter McFarren, Associated Press, Sept. 23, 1984, p. A17.  Most of
the content has been published in AIList already, but the following
may be new.

"Atamiri [Guzman de Rojas' program] is 10 times faster than any of the
others," said Bill Page, a computer specialist at the International
Research Center in Ottawa, Canada.  The center published Guzman de
Rojas' first study of Atamiri's potential in 1980, and Wang has just
offered him a $50,000 grant and a $100,000 computer to refine his
system.

The creator of Atamiri hopes to expand its vocabulary from the current
3,000 to 8,000 words per language [English, French, German, Portuguese,
and Spanish] to about 30,000.  Then, he says, it will be possible to
translate prosaic texts such as newspaper articles with about 90 percent
accuracy.  Literary translations would come later, but human translators
will always have to be around to make corrections.

                                        -- Ken Laws

------------------------------

Date: Tue, 11 Dec 84 16:21:58 pst
From: ucdavis!lakhota@Berkeley (Lakhota)
Subject: Language deficiencies (AI List Digest 2:167,168,172)


   The interesting discussion of possible language deficiencies was triggered
by two anecdotes, one involving Australian Aborigines and the other American
Indians.  It would be useful in this context to look at some empirical facts
relating to these languages.  Australian languages are perfectly capable of
forming conditional and hypothetical expressions.  Examples of languages with
references follow below:

Dyirbal - Dixon, THE DYIRBAL LANGUAGE OF NORTH QUEENSLAND. CUP, 1972.
Tiwi - Osborne, THE TIWI LANGUAGE. Australian Institute of Aboriginal Studies
       [AIAS], Australian Aboriginal Studies no. 55, Linguistic Series no. 21,
       1974.
Walmatjari - Hudson, THE CORE OF WALMATJARI GRAMMAR. AIAS, 1978.
Guugu Yimidhirr - Haviland, Guugu Yimidhirr. In Dixon & Blake (eds.),
       HANDBOOK OF AUSTRALIAN LANGUAGES [HAL], v. 1. John Benjamins, 1979.
Djapu - Morphy, Djapu, a Yolngu dialect. HAL, v. 3. John Benjamins, 1983.
Yukulta - Keen, Yukulta. HAL, v. 3.
Nunggubuyu - Heath, A FUNCTIONAL GRAMMAR OF NUNGGUBUYU. Humanities Press, 1984.

   The same holds true for American Indian languages.  It is worth mentioning
that there is now more on Hopi than Whorf's papers.  E. Malotki has written two
700-page books on Hopi concepts of space and time: HOPI TIME, Mouton, 1983, and
HOPI RAUM (not yet translated into English).  These volumes should lay to rest
speculation about what Hopi does and doesn't have.  Examples of American Indian
languages and references follow:

Nootka - Sapir & Swadesh, NOOTKA TEXTS. LSA, 1939.
Yokuts - Newman, YOKUTS LANGUAGE OF CALIFORNIA. VFPA 2, 1944.
Cree - Wolfart, PLAINS CREE: A GRAMMATICAL SKETCH.  Trans. APS, 1973.
Takelma - Sapir, The Takelma Language of Southwestern Oregon. HANDBOOK OF
      AMERICAN INDIAN LANGUAGES [HAIL], v. 2 (BBAE 40, 2), 1922.
Tunica - Haas, TUNICA. HAIL, v. 4. J.J. Augustin, 1940.
Uto-Aztecan - Langacker, AN OVERVIEW OF UTO-AZTECAN LANGUAGES. STUDIES IN
      UTO-AZTECAN GRAMMAR, v. 1. SIL Publ. in Ling. 56, 1977.

   There are hundreds of Aboriginal and American Indian languages, and these
are only a handful of examples.  Nevertheless, they illustrate the point that
these languages do have the capacity for forming conditional, counterfactual,
and hypothetical expressions.  If anyone desires any further references, I'd
be happy to supply them.

   Robert Van Valin (ucdavis!lakhota@BERKELEY)
   Linguistics, UC Davis

------------------------------

Date: Wed, 12 Dec 84 10:40:02 pst
From: Peter Karp <karp@diablo>
Subject: Christmas Tidings

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


                        A VISIT FROM ST. NICHOLAS
                        -------------------------

Twas the nocturnal segment of the diurnal period preceding the annual
yuletide celebration, and throughout our place of residence, kinetic
activity was not in evidence among the possessors of this potential,
including that species of domestic rodent known as Mus musculus.
Hosiery was meticulously suspended from the forward edge of the wood
burning caloric apparatus, pursuant to our anticipatory pleasure
regarding an imminent visitation from an eccentric philanthropist
among whose folkloric appellations is the honorific title of St.
Nicholas.

The prepubescent siblings, comfortably ensconced in their respective
accommodations of repose, were experiencing subconscious visual
hallucinations of variegated fruit confections moving rhythmically
through their cerebrums.  My conjugal partner and I, attired in our
nocturnal head coverings, were about to take slumbrous advantage of
the hibernal darkness when upon the avenaceous exterior portion of
the grounds there ascended such a cacophony of dissonance that I felt
compelled to arise with alacrity from my place of repose for the
purpose of ascertaining the precise source thereof.

Hastening to the casement, I forthwith opened the barriers sealing
this fenestration, noting thereupon that the lunar brilliance without,
reflected as it was on the surface of a recent crystalline
precipitation, might be said to rival that of the solar meridian
itself-- thus permitting my incredulous optical sensory organs to
behold a miniature airborne runnered conveyance drawn by eight
diminutive specimens of the genus Rangifer, piloted by a minuscule,
aged chauffeur so ebullient and nimble that it became instantly
apparent to me that he was indeed our anticipated caller.  With his
ungulate motive power travelling at what may possibly have been more
vertiginous velocity than patriotic alar predators, he vociferated
loudly, expelled breath musically through contracted labia, and
addressed each of the octet by his or her respective cognomen - "Now
Dasher, now Dancer..." et al. - guiding them to the uppermost exterior
level of our abode, through which structure I could readily
distinguish the concatenations of each of the 32 cloven pedal
extremities.

As I retracted my cranium from its erstwhile location, and was
performing a 180-degree pivot, our distinguished visitant achieved -
with utmost celerity and via a downward leap - entry by way of the
smoke passage.  He was clad entirely in animal pelts soiled by the
ebon residue from oxidations of carboniferous fuels which had
accumulated on the walls thereof.   His resemblance to a street vendor
I attributed largely to the plethora of assorted playthings which he
bore dorsally in a commodious cloth receptacle.

His orbs were scintillant with reflected luminosity, while his
submaxillary dermal indentations gave every evidence of engaging
amiability.  The capillaries of his malar regions and nasal
appurtenance were engorged with blood which suffused the subcutaneous
layers, the former approximating the coloration of Albion's floral
emblem, the latter that of the Prunus avium, or sweet cherry.  His
amusing sub- and supralabials resembled nothing so much as a common
loop knot, and their ambient hirsute facial adornment appeared like
small, tabular and columnar crystals of frozen water.

Clenched firmly between his incisors was a smoking piece whose gray
fumes, forming a tenuous ellipse about his occiput, were suggestive of
a decorative seasonal circlet of holly.  His visage was wider than it
was high, and when he waxed audibly mirthful, his corpulent abdominal
region undulated in the manner of impectinated fruit syrup in a
hemispherical container.  He was, in short, neither more nor less than
an obese, jocund, multigenarian gnome, the optical perception of whom
rendered me risibly frolicsome despite every effort to refrain from so
being.  By rapidly lowering and then elevating one eyelid and rotating
his head slightly to one side, he indicated that trepidation on my part
was groundless.

Without utterance and with dispatch, he commenced filling the
aforementioned appended hosiery with various of the aforementioned
articles of merchandise extracted from his aforementioned previously
dorsally transported cloth receptacle.  Upon completion of this task,
he executed an abrupt about-face, placed a single manual digit in
lateral juxtaposition to his olfactory organ, inclined his cranium
forward in a gesture of leave-taking, and forthwith effected his egress
by renegotiating (in reverse) the smoke passage.  He then propelled
himself in a short vector onto his conveyance, directed a musical
expulsion of air through his contracted oral sphincter to the antlered
quadrupeds of burden, and proceeded to soar aloft in a movement
hitherto observable chiefly among the seed-bearing portions of a
common weed.  But I overheard his parting exclamation, audible
immediately prior to his vehiculation  beyond the limits of
visibility:  "Ecstatic yuletide to the planetary constituency, and to
that selfsame assemblage, my sincerest wishes for a salubriously
beneficial and gratifyingly pleasurable period between sunset and
dawn."

-- From Eleonore Johnson at Teknowledge

------------------------------

Date: Tue, 11 Dec 84 00:06 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: Conference - Machine Translation


               CALL  FOR  PAPERS

CONFERENCE ON THEORETICAL AND METHODOLOGICAL ISSUES

   IN MACHINE TRANSLATION OF NATURAL LANGUAGES

              Colgate  University
              Hamilton  NY  13346
              August 14-16, 1985

The program of the conference will be biased toward invited lectures and
panel discussions.  However, a restricted number of excellent submitted
papers will also be included.

The major topics of the conference are as follows:

-- Machine Translation (MT) as an application area for Theoretical
   Linguistics (including stylistics and discourse analysis)

-- MT as an application area for Artificial Intelligence (including the
   choice of the representation schemata for MT)

-- Theory and methodology of translation and machine translation

-- Sublanguages, restricted domains and MT

-- MT as a case study in software system development

-- Computational tools for MT, human engineering aspects,
   management and evaluation of MT projects.


Papers should not exceed 3,000 words and should include a 250-word abstract
and a list of index terms.  Send them (and address all inquiries) to

Sergei Nirenburg
MT Conference Program Chair
Department of Computer Science
Colgate University
Hamilton  NY  13346
(315) 824-1000 x586

Each paper will be read by two members of the program committee:

Christian Boitet, University of Grenoble
Jaime Carbonell, Carnegie-Mellon University
David MacDonald, University of Massachusetts
James Pustejovsky, University of Massachusetts
Allen Tucker, Colgate University
Don Walker, AT&T Laboratories

The emphasis of the conference is on theoretical and methodological
issues; papers that do not address such issues will not be considered.

Dates: Submission deadline        -- March 11, 1985
       Notification of acceptance -- May 15, 1985
       Final version due          -- June 17, 1985


>>>>> the above will provide a good opportunity to conduct more lively
>>>>> discussions of Aymara, Sastric Sanskrit, Esperanto, etc., the
>>>>> problem of translatability, theory of translation (even not
>>>>> necessarily automatic), interlinguae and their structure...

------------------------------

Date: Tue, 11 Dec 84 11:20:41 cst
From: Don Kraft <kraft%lsu.csnet@csnet-relay.arpa>
Subject: JASIS Call for Papers

As the new editor of the JOURNAL OF THE AMERICAN SOCIETY FOR
INFORMATION  SCIENCE  (JASIS),  I  am sending out a call for
papers.  We are  a  refereed  professional  journal  seeking
scholarly, relevant articles in the area of information sci-
ence.  To submit an article, please send three copies of the
manuscript to me at

     Donald H. Kraft
     Department of Computer Science
     Louisiana State University
     Baton Rouge, LA  70803.

If you have any questions, I can also be reached at
(504) 388-1495 or
kraft%lsu@csnet-relay.

I have attached below a list of topics considered  relevant.
Please  note  the presence of artificial intelligence, which
has become of interest, especially in the area  of  informa-
tion  retrieval (intelligent front ends, expert systems, and
the use  of  natural  language  processing  seem  especially
relevant  to  my  readers  at  the moment).  You may wish to
check out the September, 1984 (v. 35,  n.  5)  issue,  which
featured a series of articles on AI.


                  CALL FOR PAPERS -- JASIS

1. Theory of Information Science          4. Applied Information Science

   Foundations of Information Science        Information systems design --
   Information theory                            tools, principles, applications
   Bibliometrics                             Case histories
   Information retrieval --                  Information system operations
      models and principles                  Standards
   Evaluation and measurement                Information technology -- hardware
   Representation, organization, and             and software
       classification of information         Automation of information systems
   ARTIFICIAL INTELLIGENCE and natural       Online retrieval systems
       language processing                   Office automation and records
                                                 management

2. Communication                          5. Social and Legal Aspects of
                                                 Information
   Theory of communication
   Non-print media                           Impact of information systems and
   Man-machine interaction                       technology upon society
   Network design, operation, and            Ethics and information
       management                            Legislative and regulatory aspects
   Models and empirical findings about       History of information science
       information transfer                  Information science education
   User and usage studies                    International issues

3. Management, Economics, and Marketing

   Economics of information
   Management of information systems
   Models of information management decisions
   Marketing and market research studies
   Special clientele -- arts and humanities,
        behavioral and social sciences, biological
        and chemical sciences, energy and environment,
        legal, medical, and education.


Authors may also  send in  brief  communications,  scholarly
opinion pieces, and even letters to the editor. In addition,
we also have a fine book review section.

Thank you in advance for your consideration of JASIS.

Don Kraft
kraft%lsu@csnet-relay

------------------------------

End of AIList Digest
********************

∂13-Dec-84  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #177    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Dec 84  19:27:06 PST
Date: Thu 13 Dec 1984 17:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #177
To: AIList@SRI-AI.ARPA


AIList Digest            Friday, 14 Dec 1984      Volume 2 : Issue 177

Today's Topics:
  AI in Engineering - SIGART Special Issue,
  Expert Systems - Micro Survey & Poker & Personal Assistants,
  Planning - Constraint Propagation and Planning,
  Report - Reflection and Semantics in LISP,
  Humor - Scientific Method,
  Seminars - Three-valued Hintikkian Epistemic Logic  (CSLI) &
    The Sequential Nature of Unification  (IBM-SJ)
----------------------------------------------------------------------

Date: Thursday, 13 December 1984 00:28:33 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: SIGART special issue on AI in Engineering

The deadline for submission of abstracts for the special issue is extended
to January 15th for all Arpanet mailers. For more information on this
issue, see the SIGART newsletter dated July 1984.  All submissions should be sent
to rj@cmu-cs-h.arpa.

Sriram

------------------------------

Date: Tue, 11 Dec 84 16:05:23 mst
From: "Arthur I. Karshmer" <arthur%nmsu.csnet@csnet-relay.arpa>
Subject: Expert systems


I am interested in obtaining information about expert systems that
run on microcomputers and software for developing expert systems on
microprocessors. We are currently using a variety of micros, including
IBM PCs and IBM ATs.

Arthur I. Karshmer
arthur.nmsu@csnet-relay

------------------------------

Date: 12 Dec 84 10:35:28 EST
From: Jeffrey Shulman <SHULMAN@RUTGERS.ARPA>
Subject: ORAC's Poker Game

        This past weekend (Sunday 12/9) "Ripley's Believe It or Not" had a
segment on ORAC's poker game.  You should try to catch it in rerun.

                                                        Jeff

------------------------------

Date: Wed, 12 Dec 84 17:28:04 EST
From: David←West%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: McGuire's Speculations on Personal Assistants (v2 #174)

   -Of course, the biodisks of a future Walt Whitman would be
exhaustively analyzed not by a future Louis Untermeyer, but
by the latter's automated personal assistant, and the resulting
voluminously definitive biography would be read and enjoyed by the public's
personal assistants.  Thus we would all be freed from untold
drudgery, to fulfil the vision of Villiers de L'Isle-Adam (1890):
  "Living? Our servants will do that for us."         :-)

------------------------------

Date: 12 Dec 84 13:30:24 EST
From: Louis Steinberg <STEINBERG@RUTGERS.ARPA>
Subject: Constraint Propagation and Planning

A recent message from chandra@uiucuxc@uiucdcs@RAND-RELAY.ARPA asked
about people working on Constraint Propagation and Planning ala
Stefik's MOLGEN.

The AI/VLSI Project at Rutgers is using this approach in building a
system to do design.  Our thesis is that:
        Design = Top Down Refinement + Constraint Propagation
Our current system aids in the design of digital VLSI circuits, but we
believe the ideas apply to the design of other kinds of things as
well.  Design and the sort of planning chandra was talking about are
essentially the same problem, although there are some peculiar things
about blocks-world style domains that make planning/design issues a
bit different from what they are in the design of circuits or, to some extent,
programs.
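
For readers unfamiliar with the technique, constraint propagation can be
sketched in miniature.  The fixpoint propagator and adder constraint below
are this editor's illustration of the general idea, not code from the
Rutgers system (whose cells, of course, model circuit behaviors rather
than simple sums):

```python
# Toy constraint propagation: wires hold values (None if unknown);
# constraints deduce unknown wires from known ones; propagation
# repeats until no constraint can add anything new (a fixpoint).

def adder(a, b, s):
    """Constraint a + b = s: deduce any one wire from the other two."""
    def deduce(v):
        out = []
        if v[a] is not None and v[b] is not None and v[s] is None:
            out.append((s, v[a] + v[b]))
        if v[s] is not None and v[a] is not None and v[b] is None:
            out.append((b, v[s] - v[a]))
        if v[s] is not None and v[b] is not None and v[a] is None:
            out.append((a, v[s] - v[b]))
        return out
    return deduce

def propagate(constraints, values):
    """Apply constraints repeatedly until no new value is deduced."""
    changed = True
    while changed:
        changed = False
        for deduce in constraints:
            for name, val in deduce(values):
                if values.get(name) is None:
                    values[name] = val
                    changed = True
    return values
```

With adder('x', 'y', 'z') and adder('z', 'q', 'w'), the known values
x=1, y=2, w=10 propagate first to z=3 and then to q=7, much as a
partial design commitment constrains the choices remaining elsewhere.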

The only paper I can point you to on our design stuff is:

        Mitchell, Steinberg, and Shulman, "A Knowledge Based Approach to
        Redesign", Proceedings of IEEE workshop on Principles of Knowledge
        Based Systems, Denver, December 3-4, 1984

Also, many of our ideas flow from previous work on REdesign and on
constraint propagation in circuits - see, for instance:

        Steinberg, L. and Mitchell, T., "A Knowledge Based Approach to
                VLSI CAD", Proceedings of 21st Design Automation
                Conference, June, 1984.

        Kelly, V.  "The CRITTER System: Automated Critiquing of Digital
                Circuit Designs", Proceedings of 21st Design Automation
                Conference, June, 1984.

        Mitchell, T., Steinberg, L., Kedar-Cabelli, S., Kelly, V., Shulman,
                J., Weinrich, T., "An Intelligent Aid for Circuit Redesign",
                Proceedings of the National Conference on Artificial
                Intelligence, 1983, pp. 274-278.

        Kelly, V., and Steinberg, L., "The CRITTER System:  Analyzing Digital
                Circuits by Propagating Behaviors and Constraints",
                Proceedings of the National Conference on Artificial
                Intelligence, 1982, pp. 284-289.  Also Report LCSR-TR-30,
                Dept. of Computer Science, Rutgers University.

        Mitchell, T., Steinberg, L.,  Smith, R. G., Schooley, P.,  Kelly, V.,
                and  Jacobs,  H.,  "Representations  for  Reasoning  about
                Digital Circuits," Proceedings of the Seventh International
                Joint Conference on Artificial Intelligence, 1981, pp. 343-344.

------------------------------

Date: Wed 12 Dec 84 17:54:25-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Report - Reflection and Semantics in LISP

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                   NEW EDITION OF CSLI REPORT NO. 8

The final edition of Report No. CSLI--84--8, entitled ``Reflection and
Semantics in LISP'' by Brian Smith, has now been published. Copies
of this report may be obtained by writing to Dikran Karagueuzian at CSLI.

------------------------------

Date: Thu, 6-Dec-84 00:42:44 PST
From: reid@Glacier.ARPA
Subject: Scientific Method

        [Forwarded to the Xerox bikers' bboard by Trow.PA@XEROX.]
        [Forwarded to the Xerox bboard by Ayers.PA@XEROX.]
        [Forwarded from the Xerox bboard by PolleZ.PA@XEROX.]
        [Forwarded to the Stanford bboard by Jock@SU-SCORE.]
        [Forwarded from the Stanford bboard by Laws@SRI-AI.]


Subject: net.bicycle.freewheel.cleaning: a reprise

As avid readers of this group may remember, we had a big row about cleaning
freewheels this summer, which was sort of ended when Fred at Varian, who is an
analytical chemist, and I, Brian at Stanford, who is a professor of CS, got
into a disagreement about something having to do with chemistry and Brian at
Stanford had the rare sense to keep his mouth shut.

However, despite being merely a computer scientist, and being quite willing to
work out of doors where the fumes won't kill him as fast, Brian remained
slightly unconvinced that the chemicals suggested by Fred at Varian were in
fact better at cleaning freewheels than the junk currently used by Brian at
Stanford. Brian had this vague suspicion that Fred the Chemist from Varian had
been exposed to lectures telling him to stay away from the kind of toxic
chemicals that Brian liked to use to clean freewheels, in much the same way
that Brian the CS professor lectures his students to stay away from Fortran
and IBM PC's.

So Brian went out in the rain and did some experiments. Actually, he had
another attack of good sense and stayed on his back porch, where the rain did
not fall directly on his head or on his freewheels or into his chemicals.

Now here a problem developed. Computer Scientists do not customarily do
experiments. Computer Scientists normally just say things because it makes
them feel good, and if they say them loudly and brashly enough then the things
become true. The current U.S. 5th generation computer project is a good
example of this.

But Brian at Stanford was once a physics major at the University of Maryland,
and he remembered how to run experiments after some consultation with his old
Physics 171 lab notebooks. The gist of it seemed to be that you were supposed
to do something twice, and the second would be identical to the first in every
way except for one controlled variable, and then if there were any differences
you could chalk them up to that variable. I think you're supposed to do a Chi
Square test in there too, or maybe draw some graphs, but this was just an
amateur experiment.

As the light dawned, Brian realized that he could do this experiment using
some hardware that was near and dear to his hacker's heart.  Brian's wife had
given him a birthday present consisting of a real mother of a power saw, a
Milwaukee worm drive power saw, with a finetooth carbide blade. That saw is
just the cat's meow--you put the carbide blade on it, put on the requisite eye
and lung protectors, and wow, you can rip up anything you can reach.  Joe-Bob
Briggs would be thrilled. The same feeling that you get when you first run
some code on a Cray, that feeling of almost limitless power, can be had much
more cheaply with a Milwaukee worm drive saw with a good carbide blade.

In particular, a Milwaukee worm drive saw with a carbide blade will saw a
freewheel clean in half. Lots of wild sparks shooting everywhere, but since
it's raining they probably won't set very much on fire. Ball bearings getting
caught in the carbide teeth and being whipped around at 200 mph and shot
across the yard, scaring the squirrels. Oh, this was great fun.

After counting his fingers and finding them all still intact, Brian took these
two demi-freewheels and stuck them in two old margarine tubs, which are one of
the principal tools of the serious amateur freewheel cleaner.  Brian got out a
beaker (after all, this was an experiment, right?  Experiments use beakers)
and measured out a beakerful of Berryman's Carburetor Cleaner [brian's
favorite toxic chemical for cleaning freewheels].  This beakerful didn't cover
the freewheel much, because it was a 60ml beaker, so then Brian poured a bunch
of glugs of Berryman's on top of the freewheel, until it was immersed. Brian
figured he would face the issue of how to clean the beaker and return it to
his kitchen at a later time.  The label on the Berryman's can says it contains
Methylene Chloride, Cresylic acid, and Perchloroethylene.

Into the other margarine tub Brian put the other half of the freewheel, and
then poured out a bunch of glugs of "Gunk" brand degreasing liquid. The label
on the Gunk can says it contains Petroleum Distillates.

Brian is sufficiently afraid of Berryman's Carburetor Cleaner that he didn't
want to go messing with it by stirring it or sticking a brush into it, but it
was quite clear to Brian from the moment this experiment started that the Gunk
was going to need some help, so he took an acid brush and used it to scrub
parts of the surface of the freewheel that was soaking in Gunk.

Brian then went to eat a chicken chimichanga (hold the sour cream) and came
back about 20 minutes later to inspect the results of the experiment.

The result was that there was no grease on either freewheel half, but there
was still a pile of rust and black goop and garbage on the Gunk half, though
not as much in the places where it had been brushed. The Berryman's Carburetor
Cleaner half was as clean as a new whistle, gleaming metal. A dead insect of
some sort was floating in the Berryman's, busily dissolving.

Brian longed for the skills of a real physical scientist--to weigh these
bisected freewheels on a microbalance, or look at them under high-powered
microscopes, or grind them up and feed them to a mass spectrometer, but none
of these machines were in evidence in the back yard, so instead he just washed
them off with soap and water and looked at them under a bright light.

What he saw is that the Berryman's Carburetor Cleaner gets freewheel halves
(and therefore, presumably, freewheels) really really clean, by dissolving or
decaying or disintegrating the grease and the rust and the insects.  And that
the Gunk gets the grease off of freewheels, and if you scrub it will get the
dirt off, but it leaves the rust behind.

The moral of this story seems to be that if you are a responsible freewheel
owner and you clean it as often as it wants to be cleaned and you avoid
letting it get built up with dirt and you keep it out of the rain, all of
which are good things to do to a freewheel, that Gunk degreaser (or other
similar chemicals) works just fine. But if you let your freewheel go too far,
to get to the point where if it were teeth you know your dentist would give
you a long lecture about flossing, that you should clean it with some sort of
toxic waste such as Berryman's Carburetor Cleaner (which has been found "more
effective" in scientific experiments at a major university.....)

        Brian Reid      Reid@SU-Glacier.ARPA    decwrl!glacier!reid

------------------------------

Date: Wed 12 Dec 84 17:54:25-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Three-valued Hintikkian Epistemic Logic  (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                 SUMMARY OF LAST WEEK'S NL1 SEMINAR
             ``Three-valued Hintikkian Epistemic Logic''
                          By Lauri Carlson

Hintikka's system of epistemic logic in K&B and Models for Modalities
contains a number of peculiar features (restricted range feature,
treatment of irreducible existential formulae) which skew the natural
interpretation of certain formulae and make it hard to ascertain
completeness of the system(s).  For instance the formula (x)(Ey)Kx=y
is valid (and does not mean I "know who everyone is"), while
(Ex)(Ey)(x=y & -Kx=y) is inconsistent (and does not mean "There is
someone who might be two different people as far as I know").  Lauri
Carlson presented a version of epistemic logic which overcomes these
difficulties and can be shown complete with respect to its intended
Kripkean style semantics.

------------------------------

Date: Thu, 13 Dec 84 09:35:13 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - The Sequential Nature of Unification  (IBM-SJ)

                 [Forwarded from the SRI-AI bboard.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Mon., Dec. 17 Computer Science Seminar
  2:00 P.M.   ON THE SEQUENTIAL NATURE OF UNIFICATION
  Audit. A     Unification of terms is a crucial step in resolution
            theorem proving with applications to a variety of
            symbolic computation problems.  It will be shown that
            the general problem is log-space complete for P, even
            if infinite substitutions are allowed.  Thus, it is
            "popularly unlikely" that unification can enjoy
            substantial speed-up in a parallel model of
            computation.  A fast parallel (NC) algorithm for term
            matching, an important subcase of unification, will
            also be presented.  This talk assumes no familiarity
            with unification or its applications.

            Dr. C. Dwork, Massachusetts Institute of Technology,
                Laboratory for Computer Science
            Host:  J. Halpern

  [...]
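
For readers who have not seen the problem, the sequential algorithm whose
parallel speed-up the talk addresses can be sketched briefly.  The Python
below is this editor's minimal Robinson-style unifier; the term
representation (tuples for compound terms, '?'-prefixed strings for
variables) is an illustrative assumption, not taken from the talk:

```python
# Syntactic first-order unification: find a substitution making two
# terms identical, or report that none exists.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings to the variable's current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Does variable v appear inside term t?  (Dropping this check
    # admits the infinite substitutions mentioned in the abstract.)
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        if occurs(a, b, subst):
            return None
        return {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unify(('f', '?x', 'b'), ('f', 'a', '?y')) binds ?x to a and
?y to b.  The talk's point is that this inherently sequential-looking
chaining of bindings is provably hard to parallelize in general, while
the matching subcase (variables on one side only) is not.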

------------------------------

End of AIList Digest
********************

∂16-Dec-84  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #178    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Dec 84  15:07:31 PST
Date: Sun 16 Dec 1984 13:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #178
To: AIList@SRI-AI.ARPA


AIList Digest            Sunday, 16 Dec 1984      Volume 2 : Issue 178

Today's Topics:
  Linguistics - Nonverbal Semantics
----------------------------------------------------------------------

Date: Fri, 14 Dec 84 13:13:04 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Nonverbal Semantics  [Long Message]


    >It's . . . convenient to talk about natural language as if
    >it's something "on its own".  However, I view this attitude
    >as scientifically unhealthy, since it leads to an
    >overemphasis on linguistic structure.  Surely the
    >interesting questions about NL concern those cognitive
    >processes involved in getting from NL to thoughts in memory
    >and back out again to language.  These processes involve
    >forming models of what the speaker/listener knows, and
    >applying world knowledge and context.  NL structure plays
    >only a small part in these overall processes, since the main
    >ones involve knowledge application, memory interactions,
    >memory search, inference, etc.

                Dyer V2 #160

    >Bravo, Dyer!  As you suggest, there is indeed much to learn
    >from the study of natural language -- but not about "natural
    >language itself"; we can learn what kinds of manipulations
    >and processes occur in the under-mind with enough frequency
    >and significance that it turns out to be useful to signify
    >them with surface language features.

    >. . . All that is very fine.  We should indeed study
    >languages.  But to "define" them is wrong.  You define the
    >things YOU invent; you study the things that already exist.
    >. . . But when one confuses the two situations, as in the
    >subjects of generative linguistics or linguistic competence
    >-- ah, a mind is a terrible thing to waste, as today's
    >natural language puts it.

                Minsky V2 #162


I suspect that the antipathy to natural-language parsers, grammars, and
theories that we often encounter in AI literature reflects a healthy
revulsion from the excesses of generative linguistics.  In all of its
many schismatic forms, generative grammar posits, as the secret inner
mechanism of language, one of various language-like systems that share
historical roots with programming languages, and uses natural-language
data only in a fragmentary and anecdotal way to advance or refute the
latest version.  These systems can be quite hairy, but I am convinced
that the hair is mostly inside the heads of the theorists.

Any natural phenomenon, or any artifact of human culture, is a
legitimate object of study.  Natural language is both an artifact of
human culture, and a natural phenomenon.  There are some who are
studying language, as opposed to the grammatical machinery of
language-like systems.

I recently reviewed a book by the linguist from whom Noam Chomsky
learned about linguistic transformations (among other things).  It will
appear in AJCL vol. 10 nos. 3 and 4 (a double issue).  The following
excerpt gives an outline of the model of language he has developed:

          I refer to the Harrisian model of language as `constructive
          grammar' and to the Harrisian paradigm for linguistics as
          `constructive linguistics'.  A constructive grammar has
          at least the following six characteristics:

          1    The semantic primes are words in the language, a base
               vocabulary that is a proper subset of the vocabulary of
               the language as a whole.

          2    Generation of sentences in the base is by word entry,
               beginning with entry of (mostly concrete) base nouns.
               The only condition for a word to enter is that its
               argument requirement must be met by some previously
               entering word or words, generally the last entry or
               entries, which must not already be in the argument of
               some other word.  The base vocabulary has thus a few
               simple classes of words:

               N         base nouns with null argument
               On, Onn   operators requiring base nouns as arguments
               Oo, Ooo   requiring operators as arguments
               Ono, Oon  requiring combinations of operators and base
                         nouns
               [NOTE:  these are intended to be O with subscripts]

               This does not exhaust the base vocabulary.  In addition
               to these, almost all of the operators require
               morphophonemic insertion of `argument indicators' such
               as -ing and that.  (These were termed the `trace' of
               `incremental transformations' in Harris 1965 and 1968.)

          3    The base generates a sublanguage which is
               informationally complete while containing no
               paraphrases.  This is at the expense of redundancy and
               other stylistic awkwardness, so that utterances of any
               complexity in the base sublanguage are unlikely to be
               encountered in ordinary discourse.  As in prior reports
               of H's work, base sentences are all assertions, other
               forms such as questions and imperatives being derived
               from underlying performatives I ask, I request, and the
               like.

          4    A well-defined system of reductions yields the other
               sentences of the language as paraphrases of base
               sentences.  The reductions were called the
               `paraphrastic transformations', and `extended
               morphophonemics' in earlier reports.  They consist of
               permutation of words (movement), zeroing, and
               morphophonemic changes of phonological shape.  Each
               reduction leaves a `trace' so that the underlying
               redundancies of the base sublanguage are
               recoverable. Linearization of the operator-argument
               dependencies--in English either `normal' SVO or a
               `topicalizing' linear order--is accomplished by the
               reduction system, not the base.  The reduction system
               includes much of what is in the lexicon in generative
               grammar (cf. Gross 1979).


          5    Metalinguistic information required for many
               reductions, such as coreferentiality and lexical
               identity, is expressed within the language by conjoined
               metalanguage sentences, rather than by a separate
               grammatical mechanism such as subscripts.
               Similarly, `shared knowledge' contextual and pragmatic
               information is expressed by conjoined sentences
               (including ordinary dictionary definitions) that are
               zeroable because of their redundancy.  [Harris's book
               Mathematical Structures of Language (Wiley 1968) shows
               that the metalanguage of natural language necessarily
               is contained within the language.]

          6    The set of possible arguments for a given operator (or
               vice-versa) is graded as to acceptability.  These
               gradings correspond with differences of meaning in the
               base sublanguage, and thence in the whole language.
               They diverge in detail from one sublanguage or
               subject-matter domain to another.  Equivalently, the
               fuzzy set of `normal' cooccurrents for a given word
               differs from one such domain to another within the base
               sublanguage.
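The domain-relative grading in point 6 amounts to nothing more exotic
than a fuzzy membership table.  [A toy illustration only; all grades and
vocabulary below are invented.]

```python
# Toy picture of graded operator-argument selection: the acceptability
# of an (operator, argument) pair is a grade in [0, 1] that varies by
# subject-matter domain.  All grades here are invented for illustration.

ACCEPTABILITY = {
    'everyday':  {('sleep', 'man'): 0.9, ('sleep', 'idea'): 0.1},
    'cognitive': {('sleep', 'man'): 0.9, ('sleep', 'idea'): 0.4},
}

def grade(domain, operator, argument):
    """Fuzzy membership of `argument` among `operator`'s cooccurrents."""
    return ACCEPTABILITY[domain].get((operator, argument), 0.0)

def normal_cooccurrents(domain, operator, threshold=0.5):
    """A crisp cut of the fuzzy set of `normal' arguments for `operator`."""
    return {arg for (op, arg), g in ACCEPTABILITY[domain].items()
            if op == operator and g >= threshold}
```

The same pair can thus grade differently from one domain to another,
which is the divergence the digest describes.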

               In informal, intuitive terms, a constructive grammar
          generates sentences from the bottom up, beginning with word
          entry, whereas a generative grammar generates sentences from
          the top down, beginning with the abstract symbol S.  The
          grammatical apparatus of constructive grammar (the rules
          together with their requirements and exceptions) is very
          simple and parsimonious.  H's underlying structures, the
          rules for producing derived structures, and the structures
          to be assigned to surface sentences are all well defined.
          Consequently, H's argumentation about alternative ways of
          dealing with problematic examples has a welcome concreteness
          and specificity about it.

               In particular, one may directly assess the semantic
          wellformedness of base constructions and of each intermediate
          stage of derivation, as well as the sentences ultimately
          derived from them, because they are all sentences.  By
          contrast, in generative argumentation, definitions of base
          structures and derived structures are always subject to
          controversy because the chief principle for controlling them
          is linguists' judgments of paraphrase relations among
          sentences derived from them.  Even if one could claim to
          assess the semantic wellformedness of abstract underlying
          structures, these are typically so ill-defined as to compel us
          to rely almost totally on surface forms to choose among
          alternative adjustments to the base or to the system of rules
          for derivation.  And as we all know, a seemingly minor tweak
          in the base or derivation rules can and usually does have
          major and largely unforeseen consequences for the surface
          forms generated by the grammar.

This model of language offers an interesting approach to the problem
brought up by Young in V2 #162, 174: how to represent the meaning of
words without (circularly) using words?

Most approaches amount to what I call `translation semantics':  having
found a set of language-universal semantic primes, one translates
sentences of a given natural language into those primes and, voila'!,
one has represented the `meaning' of those NL sentences.

Let us ignore the difficulty of finding a set of semantic universals (a bit
of hubris there, what!).  The `representation of meaning' is itself a
proposition in a more-or-less artificial language that has its own
presumably very simple syntax (several varieties of logic are promoted
as most suitable) and--yes--its own semantics.  Logics boil `meaning'
down to sets of `values' on propositions, such as true/false.

`But my system', rejoins Young, `uses actual nonverbal modalities, it
has real hooks into the neurological and cognitive processes that human
beings use to understand and manipulate not only language, but all other
experience as well'.  That may be.  It does beg the question to what
degree cognitive processes and even neurological processes are molded by
language and culture.  (In Science 224:1325-1326 Nottebohm reports that
the part of the forebrain of adult canaries responsible for singing
becomes twice as large coincident with (a) increased testosterone and
(b) learning of songs.  This is the same whether the testosterone
increase occurs annually in the Spring or is induced experimentally, the
latter even in females, who consequently learn to sing songs as if they were
males.  Vrenson and Cardozo report in Brain Research 218:79-97
experiments indicating that both the size and shape of synapses in the
visual cortices of adult rabbits changed as a result of visual training.
Cotman and Nieto-Sampedro survey research on synapse growth and brain
plasticity in adult animals in Annual Review of Psychology 33:371-401.
Roger Walsh documents other research of this sort in his book Towards an
Ecology of Brain.  Conventional wisdom of brain science, that no new
neurons are formed after infancy, is unwise.)

The padres of yore surveyed the primitive languages around their
missions and found so many degenerate forms of Latin.  Their grammars
lay these languages on the procrustean bed of inflections and
declensions in a way that we see today as obviously ethnocentric and
downright silly.  We run the same risk today, because like those padres
we cannot easily step out of the cultural/cognitive matrix with which we
are analyzing and describing the world.  Ask a fish to describe water:
the result is a valid `insider's view', but of limited use to nonfish.

Mr. Chomsky characterized his mentor in linguistics as an Empiricist and
himself as a Rationalist, and in the Oedipal struggle which ensued
mother Linguistics has got screwed.  Given that systems based on
constituent analysis are inherently overstructured, with layers of
pseudo-hierarchy increasingly remote from the relatively concrete words
and morphemes of language, an innate language-learning device is
ineluctable:  how else could a child learn all of that complexity in so
short a time on so little and so defective evidence?  The child cannot
possibly be an Empiricist, she must be a Rationalist.  Given a
biologically innate language-acquisition device, there must be a set of
linguistic universals that all children everywhere come into the world
just knowing, and all languages must be specialized realizations of
those archetypes--phenotypes of that genotype, as it were.  (Chomsky did
not set out to `define' natural language but to explain it.  It is
principally because his `underlying', `innate' constructs have a
connection to empirical data that is remote at best--rather like the
relation of a programmer's spec to compiled binary--that they appear
to be (are?) definitions.)

But consider a model in which the structure of language is actually
quite simple.  Might the characteristics of that model not turn out to
be those of some general-purpose cognitive `module'?  I believe Harris's
model, sketched above, presents us this opportunity.

Now about Jerry Fodor's book The Modularity of Mind, which Young mentions.
The following is from the review by Matthei in Language 60.4:979,

        F presents a theory of the structure of the mind in which two
        kinds of functionally distinguishable faculties exist:
        `vertical' faculties (modules) and `horizontal' faculties
        (central processes). . . . F identifies the modules with the
        `input systems', whose function is to interpret information
        about the outside world and to make it available to the central
        cognitive processes.  They include [five modules for] the
        perceptual systems . . . and [one for] language. . . .

        The central processes, as horizontal faculties, can be
        functionally distinguished from modular processes because their
        operations cross content domains.  The paradigm example of their
        operation is the fixation of belief, as in determining the truth
        of a sentence.  What one believes depends on an evaluation of
        what one has seen, heard, etc., in light of background
        information. . . .

                . . . the condition for successful science is that
                nature should have joints to carve it at:  relatively
                simple subsystems which can be artificially isolated and
                which behave, in isolation, in something like the way
                that they behave in situ. (128)

        [The above, by the way, suggests that, while studying language
        in isolation--severing its `joints' with other systems--may be
        of limited interest to AI researchers seeking to model language
        users' performance, rather than their competence, it is not
        `scientifically unhealthy'.  It also points to the central
        problem of semantics, as Matthei points out . . .]

        Modules, F says, satisfy this condition; central processes do
        not.  If true, this is bad news for those who wish to study
        semantics.  The burden which F puts on them is that they must
        demonstrate that computational formalisms exist which can
        overcome the problems he enumerates.  These formalisms will have
        to be invented, because F maintains that no existing formalisms
        are capable of solving the problems.

I, too, feel that notions of modules and modularity, or at least Fodor's
attempt to consolidate them, make a great deal of sense.  However, the
caveat about the study of semantics underscores my contention that
semantics properly must be based on an `acceptability model': a body of
knowledge stated in sentences in the informationally complete
sublanguage of Harris's base, whose acceptability is known.  This is
akin to a `truth model' in alethic approaches to semantics in logic.
It is also very simply conceived of as a database such as is constructed
by Sager's LSP systems at NYU.  We should note that the sentences of
this base sublanguage correspond very closely across languages (cf. e.g.
the English-Korean comparison in Harris's 1968 book), and that the
vocabulary of the base sublanguage is a subset of that of the whole
language (allowing for derivation, morphological degeneracy, and the
like), much closer to Young's categories than the vocabulary with which
he expresses so much frustration.

There is one pointer I can give to another version of `translation
semantics' that probably satisfies Young's sense of `nonverbal':
Leonard Talmy developed an elaborate system for representing the
semantics and morphological derivation of some pretty diverse languages
in his (1974?) PhD dissertation at UC Berkeley.  The languages included
Atsugewi (neighbor and cousin to the Native American language I worked
on), Spanish, and Yiddish.  He went to SRI after graduation, but I have
no idea where he is now or what he is doing.

        Bruce Nevin (bn@bbncch)

------------------------------

End of AIList Digest
********************

∂19-Dec-84  1435	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #179    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Dec 84  14:35:29 PST
Date: Wed 19 Dec 1984 11:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #179
To: AIList@SRI-AI.ARPA


AIList Digest           Wednesday, 19 Dec 1984    Volume 2 : Issue 179

Today's Topics:
  AI Tools - Micro-PROLOG & SmallTalk AI Systems,
  Applications - Expert Legal Systems & Intelligent Skimmer,
  Planning - Constraint Propagation and Design,
  Reports - SEAI Publications,
  Politics - Visitors from USSR,
  Lab Description - NRL,
  Workshop - Logic and Computer Science
----------------------------------------------------------------------

Date: Mon, 17 Dec 1984  17:36 EST
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: micro-PROLOG info request


We are looking for PROLOG packages which run on micros, especially the
IBM PC.  If you are familiar with any PROLOG interpreters for the PC,
especially one with a tutorial package, I appreciate any information
that you could give me.

Thanks,

Chunka Mui
Chunka%mit-oz@mit-mc

------------------------------

Date: 17 Dec 84 12:21:53 EST
From: Mike.Rychener@CMU-RI-ISL2
Subject: SmallTalk AI systems?

      [Forwarded from the CMU bboard by Laws@SRI-AI.]

Does anyone know of any successful AI applications coded in SmallTalk?
This was stimulated by the new Tektronix AI machine, whose blurb touts
its SmallTalk as useful for developing expert systems.

------------------------------

Date: 17 Dec 1984 12:21-EST
From: Alexander.Hauptmann@CMU-CS-G.ARPA
Subject: expert legal systems?

I am looking for references to publications about expert systems for
legal reasoning. If you know of anybody who has done work in this area,
please let me know (Alexander.Hauptmann@CMU-CS-G.ARPA). Among other
things, I have heard that Roger Schank has done work in this area, but have
been unable to find citations. Thanks.

                                        Alex.

------------------------------

Date: 17 Dec 84 06:44:45 EST
From: Robert.Thibadeau@CMU-CS-H
Subject: expert legal system

      [Forwarded from the CMU bboard by Laws@SRI-AI.]

Extensive work in legal reasoning was done by Thorne McCarty.  Thorne published
in the Harvard Law Review back in 1977ish.  His topic was legal reasoning
in corporate tax law -- one of the areas where the Supreme Court effectively
makes the law.  Thorne, educated at Harvard in Law and Stanford in AI,
and a tenured professor of Law at Rochester, evaluated Yale, way back,
but decided to do his AI work at Rutgers.  While I regard Roger Schank as
absolutely excellent, I find it unfortunate that like natural language
understanding systems vis a vis Yorick Wilks, belief systems vis a vis
Chuck Schmidt and N. Sridharan, Memory vis a vis 100 years of thought
in German and British psychology, we find now Roger implied at the leading
edge in legal reasoning.  Roger does good work, but he takes a long time
to see the light and he tends to ignore his surround.
I would hope the people on the frontiers not be forgotten this time around.

------------------------------

Date: 17 Dec 84 16:05:04 EST
From: BIESEL@RUTGERS.ARPA
Subject: Intelligent skimmer suggestion.

As the volume of mail in this and other lists increases I find that
I spend more and more time only skimming the text, searching for the
message or two that is of interest to me. It occurs to me that an
intelligent program for skimming text would be of some help in this.

This program would scan a message, break up its sentences into
grammatical tokens, and would first display only nouns and verbs - in
their correct places on the screen. As the text scrolls upward adjectives,
adverbs and pronouns appear, and by the time the text has traversed
2/3 of the screen, all words in each sentence are filled in. A smarter
system would also keep track of the rate at which CTRL-S/CTRL-Q is sent,
and adjust its transfer rate accordingly. A really smart program would
keep track of keywords in those pieces of text which the user actually
reads, determined by how often he slows down the skimming presentation,
and would automatically present more fleshed out versions of messages
which contained such keywords.

There is no good reason why text has to be displayed in a letter-
sequential form. We have a whole 2-D array to work with; let's try
to use it to enhance rather than obfuscate communication.
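[A toy sketch of the progressive-reveal idea.  Purely illustrative: the
word-class `tagger' here is a stub keyed on a small closed-class word
list, and a line's screen depth stands in for scroll position.]

```python
# Toy progressive reveal: show only content words when a line appears at
# the bottom of the screen, and fill in the remaining words once the
# line has traversed 2/3 of the screen.  The "tagger" is a stub:
# closed-class words are hidden first; anything else counts as content.

CLOSED_CLASS = {'the', 'a', 'an', 'of', 'and', 'very', 'quite', 'to'}

def render_line(words, depth, screen_height=24):
    """depth: 0 = just appeared (bottom); screen_height = top of screen.
    Content words always show; others fade in past the 2/3 mark."""
    filled = depth >= (2 * screen_height) // 3
    out = []
    for w in words:
        if filled or w.lower() not in CLOSED_CLASS:
            out.append(w)
        else:
            out.append(' ' * len(w))   # hold the word's place on screen
    return ' '.join(out)

line = ['The', 'program', 'skims', 'the', 'very', 'long', 'digest']
print(render_line(line, depth=0))    # content words only
print(render_line(line, depth=20))   # fully filled in
```

A smarter version would adapt the fill-in threshold to the reader's
observed flow-control rate, as the message suggests.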

Biesel@rutgers

------------------------------

Date: Saturday, 15 December 1984 03:46:38 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Planning, Constraint Propagation and Design

A part of the January 1983 SIGART newsletter was dedicated to Planning.
A number of abstracts on (then) current research were compiled by
Ann Robinson.

I would like to add the following to Steinberg's equation about design:

   Heuristic Knowledge  (HK) +  Well-structured Programs (Algorithms) (WP)
                           = Good Engineering Programs (GEP)

If we add Causal knowledge (CK) to the LHS of above equation then we have

 HK + WP + CK = EEP (Efficient Engineering Programs)

Any comments?

Has anyone tried the task suspension method instead of constraint
propagation? Task suspension works in the following manner (there is more to
it):
   IF a constraint in a certain part of the design cannot be satisfied
   THEN suspend that task and get the values needed to satisfy the constraint
In other words if you are designing Module-1 and find that there is a
constraint relating Module-1 to Module-2 then suspend the task of performing
Module-1 and design that part of Module-2 which satisfies the constraint.
I tried this in structural design [1] using a  Hearsay-type approach.
However, I ran into problems when a constraint involved interaction
between 3 or more components.
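[The suspension rule above can be sketched as follows.  An illustrative
toy only, assuming a two-module design with one shared constraint; this
is not the ALL-RISE implementation.]

```python
# Toy sketch of task suspension in design: when a parameter of the
# current module is constrained by a value from another module, suspend
# the current task, design just enough of the other module to supply
# that value, then resume.  Purely illustrative; not the ALL-RISE system.

def design(module, values):
    """Design `module`, suspending whenever a constraint needs a value."""
    trace = []
    for param, needs in module['params']:
        for n in [n for n in needs if n not in values]:
            trace.append(f"suspend {module['name']}: need {n}")
            values[n] = PROVIDERS[n]()        # design the providing part
            trace.append(f"resume {module['name']}")
        values[param] = sum(values[n] for n in needs) if needs else 1.0
        trace.append(f"set {param}={values[param]}")
    return trace

# Module-1's `load' parameter is constrained by Module-2's `stiffness'.
PROVIDERS = {'stiffness': lambda: 2.0}       # partial design of Module-2
module1 = {'name': 'Module-1',
           'params': [('base', []), ('load', ['stiffness'])]}

vals = {}
log = design(module1, vals)
```

With three or more interacting components, a single suspend/resume pair
no longer suffices, which is the difficulty the message reports.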

[1] ALL-RISE : A Case Study in Constraint-Directed Design, Working Paper,
               Department of Civil Engineering, C-MU, Pittsburgh, PA 15213

Sriram

------------------------------

Date: Tue 18 Dec 84 10:47:20-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: SEAI Publications

A brochure from the SEAI Institute has crossed my desk.  They are offering
a two-volume survey of commercial and near-commercial AI systems as of
August 1984.  The two 200-page surveys, AI Applications for Manufacturing
and AI Applications for Business Management, include 136 products and
in-house systems at over 100 corporations, including 28 expert-system
toolkits and 10 natural-language systems.  The reports are $110 each, or
$200 together.  SEAI also offers a three-volume set on Machine Vision for
Robotics and Automated Inspection and several other reports on robots
in industry, AI, expert systems, and automated guided vehicle systems.
You can contact them at Box 590, Madison, GA 30650, (404) 342-9638.


[Note: I have no connection with the company, and pass this along only in
the hopes that it will be of use to the Arpanet or AI research communities.
I obviously cannot report on every AI book offered by every publisher, but
see no harm in forwarding book reviews or notices about obscure reports.
Correspondence about this policy should be directed to AIList-Request@SRI-AI.
-- KIL]

------------------------------

Date: Tue, 18 Dec 84 21:34:41 PST
From: Judea Pearl <judea@UCLA-LOCUS.ARPA>
Subject: Visitors from USSR

 I wish to share with the readers of the AI-Digest this
letter, which I wrote to Professor Viktor V. Aleksandrov,
Head, Leningrad Research Computer Center, who is currently
visiting the U.S. and who is particularly interested in
meeting AI researchers.


Dear Professor Alexandrov,

           I would have liked very much to meet you during
your current visit to UCLA, but the following circumstances
will not allow me to do so in good faith:

           I have received from the Association of Computing
Machinery (ACM) a long list of Soviet computer scientists who,
for the past several years, have been barred from scientific
activity and have been denied permission to participate
in scientific meetings, domestic as well as international.
Some of these people would like to present papers at the
International Joint Conference on Artificial Intelligence
which will take place at UCLA, August 1985, but will be
prevented from leaving your country.

           I am particularly familiar with the stories of:

                 Alexander Lerner, Moscow
                 Isai Goldstein, Tbilisi
                 Gregory Goldstein, Tbilisi

whom I met at the International Joint Conference on
Artificial Intelligence - 1973, Tbilisi, Georgia, and with
whom I tried to keep in touch. To my dismay, I find
these three cited in the 1984 Report of the ACM
Committee on Scientific Freedom and Human Rights as being
harassed and prevented from engaging in scientific
activities. In 1973, I personally witnessed
Isai Goldstein being barred from entering the lecture hall of
the Tbilisi conference, so I feel obliged to express my
concern that today, eleven years later, the method of
professional deprivation is still practiced in your country.

        Although I would like to contribute to improved
scientific cooperation between our two countries, my
understanding has been that a prerequisite to true
cooperation is the freedom for individuals to engage in
scientific pursuits and to communicate their findings to
other scientists.  Your government apparently has a
different perception of cooperation, and I will be happy to
discuss with you these differences. However, because you are
an official Soviet visitor, I cannot meet with you in good
faith to engage in a purely professional discussion. To do
so would be to betray Professor Lerner, who personally
pleaded with me to refrain from participation in U.S.-USSR
cooperative programs until minimum standards of scientific
freedom are agreed upon.

         I hope you understand my position and will
 convey my regrets to your colleagues at the Leningrad
 Computer Research Center.

                              Yours Sincerely,

                                         Judea Pearl
                           Professor, Computer Science Dpt.
                                  University of California
                                        Los Angeles


A note to the reader:
        The 1984 report of the ACM Committee on Scientific
Freedom and Human Rights is available from my office. It is
scheduled for publication in the January-85 issue of the
Communications of the ACM.

        If you  meet with Professor Alexandrov, or other
Soviet visitors, you may find it appropriate to
express your sensitivity to two allegations made
in the ACM report:

1. That Soviet scientists are dismissed from their jobs
   (or demoted) once they apply for exit visas.

2. That these scientists are prevented from attending
   professional meetings (even in the privacy of their homes)
   or from submitting papers to international meetings,
   e.g., IJCAI-83.

       If you kindly send me a summary of Professor
Alexandrov's replies, especially regarding the practices at
his own Institute, I will be glad to bring them to the
attention of the ACM Committee.
                                      J.Pearl
                                <judea@ucla-locus.arpa>

------------------------------

Date: Thu, 13 Dec 84 16:30:45 est
From: Rod Johnson <johnson@nrl-css>
Subject: Lab Description - NRL (Computer Science & Systems Branch)

                       [Edited by Laws@SRI-AI.]



                       NAVAL RESEARCH LABORATORY
                  Computer Science and Systems Branch


The Computer Science and Systems Branch of NRL is active in:

  >> software engineering  >> computer security  >> information theory
  >> search theory         >> expert systems     >> message processing
  >> software measurement  >> speech and signal processing
  >> formal software specifications.

Our interests also include performance modeling and evaluation, human-
computer interfaces, and program specification and verification tools.

    OUR GROUP is small, close-knit, and informal, with a research staff
of 22 members; 9 hold PhDs.  Attendance at conferences and publication
in the open literature are encouraged.  There are ample opportunities
for educational support toward graduate degrees.  Several branch
members also teach at local universities.

    COMPUTING RESOURCES at NRL are being expanded to include a Cray
X-MP/12 system.  This unique system will include a front end consisting
of a cluster of VAX 11/785s with connections to the ARPANET and to a
broadband network linking other NRL computers.  The Branch maintains
VAX 11/780, Sun, and VAX 11/750 machines running UNIX and VMS, and a
Symbolics Lisp Computer.  Each office includes a terminal with a
high-speed link to these systems, which are also linked to the ARPANET.

    THE NAVAL RESEARCH LABORATORY is a government laboratory located on
a 129-acre campus on the banks of the Potomac River in Washington,
D.C.  It was founded at the suggestion of Thomas Edison more than 60
years ago and carries out a wide variety of basic and applied
research.  The Washington area offers a temperate climate and an
outstanding cultural environment, including the museums of the
Smithsonian Institution, the Kennedy Center for the Performing Arts,
and several excellent professional and collegiate theatre groups.

    For more information, contact:

    Mr. S. H. Wilson
    Head, Computer Science and Systems Branch
    Code 7590                      Phone:  (202) 767-2518
    Naval Research Laboratory      Arpanet:  Wilson@NRL-CSS
    Washington, D.C.  20375        uucp:  ...!decvax!nrl-css!wilson

------------------------------

Date: Mon, 10 Dec 84 16:07:59 est
From: ukma!marek@ANL-MCS.ARPA (Wiktor Marek)
Subject: Workshop - Logic and Computer Science

                    FIRST COMMUNICATION

                        Workshop on

                 LOGIC AND COMPUTER SCIENCE

               Lexington, KY, June 9-14, 1985.

     In the first half of June 1985 a workshop on Logic  and
Computer Science will take place in Lexington, Kentucky.

     The workshop will take 4 and 1/2 working days.

     The workshop will cover those parts of Computer Science
where  an  active part is played by logic-inclined research-
ers, in particular:

                   Theory of Computation
                    Theory of Databases
                  Artificial Intelligence
        Theory of Operating Systems (Temporal Logic)
                    Program Verification
                     Logic Programming

All inquiries should be sent to:

                 Logic and Computer Science
               Department of Computer Science
                   University of Kentucky
                 Lexington, KY, 40506-0027
                       (606) 257-3961
or:


    Logic and Computer Science
    ARPA:  "ukma!logic-and-cs"@ANL-MCS   (Note the quote marks.)
    UUCP->  unmvax -----------\
    UUCP->  research ----------\____ !anlams --\
    UUCP->  boulder -----------/                >-!ukma!logic-and-cs
    UUCP->  decvax!ucbvax ----/                /
                       cbosgd!hasmed!qusavx --/


                 Organizational Committee:
          Forbes Lewis  Wiktor Marek  Anil Nerode

Lexington, December 1984

------------------------------

End of AIList Digest
********************

∂21-Dec-84  1303	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #180    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Dec 84  13:03:37 PST
Date: Fri 21 Dec 1984 10:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #180
To: AIList@SRI-AI.ARPA


AIList Digest            Friday, 21 Dec 1984      Volume 2 : Issue 180

Today's Topics:
  Humor - Jokes & Limericks & Linguistics & D/B Theory & Lardware &
    Computer Museum Traveling Exhibit
----------------------------------------------------------------------

Date: Thu 13 Dec 84 09:21:09-EST
From: Bob Hall <RJH%MIT-OZ@MIT-MC.ARPA>
Subject: AI Jokes

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                      Announcing the only annual

                         AI Joke Contest

Come up with a good cocktail-party-worthy joke about some aspect of
AI and win a U.C. Berkeley T-shirt!  Enter as many times as you like.
Winner (exactly one) will be judged solely on the number of ``HA''s
evoked from the impartial panel of judges.  Ties will be broken by
earliest postmark and contest ends after a sufficiently long time with
no entries.

To be eligible for a prize, you must include your address and t-shirt size.
Entries become property of the judges.

To Enter:

Mail via US Mail your entry in any legible format to

                       AI Jokes
                       1717 Allston Way
                       Berkeley, CA  94703

Please do not send any entries to me, as I am just posting this.  I can,
however, answer limited questions on this, like "Is it legit?" (Yes.)

Enter Now!

------------------------------

Date: Tue 18 Dec 84 16:38:26-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Call for Computer Science Limericks--ABACUS

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The journal Abacus will pay $25 for each original limerick related to computing
that is accepted and published.  Send entries to Mr. Eric A. Weiss, Box 222,
Springfield, PA 19064.  Submissions should be better than the following
samples:

Said a recent B.S. in E.E.
"Three things are important to me:
 How much do you pay?
 Must I work every day?
And the proof of correctness of C."

A professor (whose last name is Wirth),
After seeing Pascal through its birth,
  Said, "It's better than Snobol,
  More structured than Cobol,
And soon will take over the earth!"

This is strictly a public service announcement for those students who want to
make some extra money.  I will make no comments about the above examples nor
my personal view of limericks in general.

Harry Llull

------------------------------

Date: 11 Dec 84 19:29:24 GMT
From: sms@eisx.UUCP (Samuel Saal)
Subject: Oxymorons, Pleonasms and various forms of Bull

              [Forwarded from net.jokes by SASW@MIT-MC.]


       From "More on Oxymorons, Foolish Wisdom in Words and
       Pictures".

       Oxymoron:   two antithetical words, adj vs. noun.  eg.
                   Living Death (can be extended: They agreed to
                   disagree)

       Pleonasm:   sort of the opposite of an oxymoron, the adj or
                   adv agrees with the noun. eg. Wet Water (what
                   else could water be?)

       Tautology:  a pleonasm whose terms are joined by a copula.
                   eg. At the center is the middle.

       Bull:       the linguistic name for such linguistic pearls
                   of logic, enabling one to label examples of:

                      - self-contradiction (To be ignorant of one's
                        ignorance is the malady of the ignorant).

                      - self reference (Brain: an apparatus with
                        which we think that we think).

                      - the obvious (Who died? I'm not sure, but I
                        think it's the one in the hearse).

                      - "read the sentence twice and be amazed that
                        it was written" (The sudden rise in
                        temperature was responsible for the
                        intolerable heat) (Nobody goes to that
                        restaurant anymore, it's too crowded)



       Sort the following according to the above rules:

         1.  The best cure for insomnia is to get a  lot of sleep.
             (W.C.Fields)

         2.  You will always find something in the last place you
             look.

         3.  He hadn't a single redeeming vice. (Oscar Wilde)

         4.  Nothing succeeds like success. (Alexandre Dumas)

         5.  New Innovation.

         6.  In these matters the only certainty is that there is
             nothing certain. (Pliny the Elder)

         7.  For those who like this sort of thing, this is the
             sort of thing they like.  (Abraham Lincoln)

         8.  Anyone who goes to a psychiatrist ought to have his
             head examined. (Samuel Goldwyn)

         9.  To visually see.

        10.  Bachelors' wives and old maids' children are always
             perfect. (Nicolas Chamfort)

        11.  One effect of the better lighting is the improved
             visibility.

        12.  He lived his life to the end.

        13.  I have made mistakes but I have never made the mistake
             of claiming that I have never made one. (James Gordon
             Bennett)

        14.  She's genuinely bogus.

       HINT: There are 2 examples of each category (not counting
       "Bull" but rather the subdivisions mentioned)

       "Words are but a window on the word...."

       Sam Saal
       ...ihnp4!eisx!sms

------------------------------

Date: 08 Dec 84 20:27 CDT
From: Maxwell←L%VANDERBILT.MAILNET@MIT-MULTICS.ARPA
Reply-to: Maxwell←L%VANDERBILT.MAILNET@MIT-MULTICS.ARPA
Subject: Language Deficiencies

There is a legend of a remote tribe of Indians in the Peruvian
Andes, the language of which has no word for "No."  Should a
member of this tribe wish to communicate a negative response,
he will nod his head and say "I'll get back to ya."    :-)

------------------------------

Date: 17 December 1984 2105-EST
From: Jeff Shrager@CMU-CS-A
Subject: Discipline&Bondage Theory

      [Forwarded by Laws@SRI-AI from a file typed from hardcopy
      and made available by Jeff Shrager@CMU-CS-A.  The original
      author is James A. Matisoff of Berkeley.]


                Announcing a new theory of language:

                   DISCIPLINE AND BONDAGE THEORY

                        Ffositam A. Semaj
                             Yelekreb

                        February 23, 1984

       [APPLICATION TO THE GROAN FOUNDATION, WASHINGTON, D.C.]

        It has become increasingly clear that the current linguistic theories
are inadequate to explain much of anything about language.

        Yesterday, however, I conceived a new theory of language, which is,
finally, the correct one.  Already I have found the solutions to virtually
all linguistic problems.  A few details remain to be worked out, but this can
certainly be accomplished during the grant period.

        Despite its explanatory and predictive power, my theory rests on a
very few simple ideas.

        (1) The chief organizing principle of language is CONTROL.

That is to say, certain words should boss others around.  This idea is
perhaps not entirely new, but my theory is the first to carry it one step
further, to the meta-theoretical level:

        (2) The linguist must control language, not vice versa.

        At no time must the theoretician allow himself to be hog-tied by mere
data.  Too much unmotivated detail clogs the mind, and can lead to "control
slippage."  Endless time can be wasted on brute undisciplined facts.  That
leads to our third axiom:

        (3) The most highly valued theory is based on the most limited
            and carefully selected data, preferably data gained from
            solitary introspection by the linguist himself.

(In difficult cases, however, it is not methodologically unsound to seek
confirmation of one's grammaticality judgments from other linguists,
provided they are working within the same theory.  It is for this reason that
I have included within this proposal a request for funds for consultation in
D/B Theory at other institutions.)

        D/B Theory is correct precisely because it succeeds in *controlling*
and *dominating* language.  The unique terminology required by our theory
reflects this orientation.  (See below, GLOSSARY OF TECHNICAL TERMS.)

        D/B Theory relates in the most efficient way imaginable to its
data base.  I have, in fact, succeeded in formulating a single sentence that
is so rich in theoretical implications, that once it is properly
disciplined-and-bound it will serve all by itself as the corpus of data for
the whole theory.  Here it is:

        (4) Helmut asked her if Fatima could say wow what a nice day
            to them sorta only if the beige one circumcised her with
            a knout.

It need hardly be emphasized that my theory also applies to other languages
than English, indeed universally to the class of all possible languages.
Firm plans are in place to have (4) translated into French during the next
(1985-86) grant period.

        On a more mundane level, note that D/B Theory uses much better names
in its example sentences than any other theory.   While some theories use
anodyne names like John and Mary, and others offer unmotivatedly cutesy-poo
ones (e.g., Mortimer, Seymour, Snurdley), D/B Theory goes in exclusively for
names like Butch, Helmut and Fatima, thereby enhancing its predictive power
in pragmatic situations where discipline and control are at issue.

        Notice also that (4) could never have been arrived at by the
"butterfly-collector" method of recording natural utterances.  Fortunately,
D/B Theory enabled me to predict that the odds of (4) occurring in a natural
conversation would be quite low.  If I had waited around to hear this
sentence uttered spontaneously I could never have formulated my theory so
rapidly, and would probably have missed the application deadline.

        D/B Theory enables us to account in a principled way for the
otherwise puzzling fact that (4) is fully grammatical, while (5), (6), and
(7) are totally unacceptable:

        (5) * Helmut sorta circumcised her with a knout.

        (6) * Wow what a nice day sorta.

        (7) * Helmut could say beige.

Even previous theories of language recognize that (7) violates a felicity
condition whereby the features [+male] and [+beige] are mutually exclusive.
If the feature specification for "Helmut" does indeed include [+male], these
theories would predict, quite correctly in this case, that (7) is
infelicitous.  Only D/B Theory, however, explains why the acceptability of (7)
increases when it is disciplined by a strappadoed clause, as in (4).

        Space constraints preclude our going into further detail here, and in
any event this discussion must necessarily appear somewhat abstract before
the special terminology required by D/B theory has been mastered.  As a
warning to the reader, the following Glossary of Technical Terms has been
provided.

        Learn them, and learn them now!

                GLOSSARY OF TECHNICAL TERMS

CAT-O'-NINE-TAIL-MENT.

        A clause which is reluctant to fit into our framework may be whipped
into shape by this operation, according to which any nine constituents may be
entailed by any nine others.  Thus (8) may be cat-o'-nine-tailed into (9):

        (8) Fatima sucked the sherbet through a straw while her
            Shiite eunuch guards leafed through a stack of girlie
            magazines without much interest.

        (9) The Queen of England opened Parliament with a knout.

As always, however, rigorous disciplinary techniques like this should
not be resorted to prematurely.  It is usually advisable to try FROTTAGE
first, in order to relax the clause and throw it off its guard.

CLAUSE-ABUSE.

        A cover-term for several more specific operations described below.
Occasionally a deeply embedded clause may be forced into self-abuse to avoid
subjugation or subincision at the hands of a clause that ranks higher on the
BOUNDEDNESS HIERARCHY.

CLAUSE-CASTRATION.

        A clause is said to have undergone castration when certain members
have been removed in order to allow a rule to work more insightfully.  Thus
(11) may be generated from (10) by this operation, which is actually
justified on independent grounds anyway, so that no special ad hoc rules need
be added to the grammar:

        (10) What's all this ballyhoo about that balloon that was
             embellished by the ballistic missile?

        (11) What's all this yhoo about that oon that was embellished
             by the istic missile?

Note that our theory correctly predicts that "embellishment" does not satisfy
the conditions for the operation of this rule, despite its surface similarity
to the castratable constituents.  "Embellished" therefore survives
(temporarily) to undergo other sorts of clause-abuse that occur later in the
grammar.

CLAUSE-CRUCIFIXION.

        A crucified clause is one which has been generated by entailment.
The head of the clause remains free to move slightly, but the rest is bound
tightly to the tree.  Ex-cruciated constituents are usually found to be much
more amenable to persuasion than before the operation applied.

CLAUSE-FROTTAGE.

        An important preliminary discourse strategy that opens clauses up
for further discipline.  Unlike its extreme form, KEELHAULING, which can
involve scraping the clause up one side and down the other, FROTTAGE requires
only a light movement from left to right and back again on the nodule which
is F-commanded by the subjugating member.

CLAUSE-STRAPPADO.

        The weakest NP's hands are tied behind its back and attached to a
pulley by means of which it is pulled out from under the VP that had been
disciplining it and raised to the next higher clause, after which it is
suddenly dropped halfway back down with a jerk.  Thus (12) may be strappadoed
into (13).

        (12) Butch said fuck you or I'll take away your teddybear
             with a knife.

        (13) Butch said fuck you, teddybear, or with a knife I'll take
             yours away, jerk.

Note that jerk-insertion must be ordered with respect to frottage, to avoid
generating such ungrammatical strings as:

        (14) *Butch said fickledy-fuckledy you, teddldy-bearidy, jerk.

PROCRUSTEAN PRUNING.

        A powerful process whereby unwanted constituents are lopped off
either from the beginning or the end of a clause, or both.  This is related
to Pham Phuc Dong's 'constituent gerrymandering', though it is much more
rigorously applied within the D/B framework.  Thus (16) may be derived from
(15) by "equi-PP":

        (15) The chomeur had no place to go during the earthquake, so he
             sat down by default, the chomeur had no place to go during the.

        (16) Earthquake, so he sat down by default.

The questionable grammaticality of (16) is accounted for by the fact that
neither the pre-pruned nor the post-pruned constituents were willing to cross
the picket line.

PROTO-HYPE THEORY.

        Proto-hype theory is an important adjunct to D/B analysis.  Generally
speaking, it enables us to recognize whether a token is behaving
satisfactorily as a member of its type.  (If a constituent is lacking in
discipline, we have ways to make it talk.)  The following data are from
French:

        (17) *Mordxe hot zix nebex aroysgeshnitn di kishkes mit a
              tsibele-kuxn.

             (Mortimer ripped out his guts on a buzz-saw, poor guy.)

Proto-hype theory enables us to predict that "tsibele-kuxn", literally:
"onion-roll", is nowhere near being a prototypical cutting instrument (though
sometimes in particular pragmatic situations poppy seeds may be rather
sharp).  We thus reject (17) as ungrammatical.

TOUGH B-MOVEMENT.

        Applies when a clause has become constipated through lack of roughage.
This is one of the more severe operations permitted by our theory, and should
only be used after milder processes like frottage and proto-hyping have
failed to dislodge the construction.  Consider the following:

        (18) *To do it squeezing over a pit full of viper without
              bran muffins or prune juice is tough duty.

This is clearly ungrammatical and infelicitous as it stands, though, as my
theory predicts, a perfectly good reading is obtained if tough b-movement is
not allowed to apply until after the sentence has been sphincter-bound, as in
(19):

        (19) It is tough duty to do it without squeezing bran muffins
             or prune juice over a pit full of vipers.

The 3-way ambiguity of this sentence is likewise predicted by the theory.

                                ***

All previous linguistic theories have been thinly disguised notational
variants of the flabbily sentimental "philology" of the past.  With
Discipline and Bondage Theory, we serve notice on language that it is to be
coddled no longer.  Broad new vistas of control have opened up.  Let 1984 be
the year that we get back at language once and for all.

                                        MAJ, Principle Investigator.

------------------------------

Date: Thu, 13 Dec 84 14:06:32 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Architecture for Malgorithms


Kathy Daley, one of our graduate students, suggests the following:

Since the "hardware" will be running "underneath" the malgorithm,
why not call it "UNDERWARE"?????

------------------------------

Date: Thu, 20 Dec 84 07:09:02 pst
From: Paul A. Ehrler <ehrler%cod@Nosc>
Subject: Lardware

    My nomination  for  Lardware of the month goes to IBM. I recall seeing a
    reference to an attempt of theirs to build  a  computer  without an ALU.
    The trick was to do everything with table  look  up, even arithmetic.  I
    guess  they reasoned that first graders are pretty good at that sort  of
    thing, so why not automate it.  It worked  to  some extent, but needless
    to say was not an overwhelming commercial success.
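[The lookup-table trick is easy to sketch.  The following is a toy
illustration in modern Python of the general idea only, not IBM's actual
design: multi-digit addition done entirely by consulting a memorized
table, digit by digit with a carry, the way a first grader would.]

```python
# Toy sketch of table-lookup arithmetic (illustrative only, not IBM's
# design).  The table maps (digit, digit, carry_in) -> (sum_digit,
# carry_out); it is precomputed once, like a first grader memorizing
# the addition table, and thereafter no arithmetic unit is needed.
ADD_TABLE = {
    (a, b, c): ((a + b + c) % 10, (a + b + c) // 10)
    for a in range(10) for b in range(10) for c in range(2)
}

def table_add(x, y):
    """Add two non-negative integers using only table lookups."""
    xs = [int(d) for d in reversed(str(x))]
    ys = [int(d) for d in reversed(str(y))]
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        s, carry = ADD_TABLE[(a, b, carry)]   # one lookup per digit
        digits.append(s)
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))
```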

------------------------------

Date: Thu, 13 Dec 84 14:41:50 est
From: Walter Hamscher <walter at mit-htvax>
Subject: Computer Museum Traveling Exhibit

      [Forwarded from the MIT bboard by SASW@MIT-MC.]


        NOON, FRIDAY, IN THE 8TH FLOOR PLAYROOM

               THE BOSTON COMPUTER MUSEUM
                  In conjunction with
              THE REVOLTING SEMINAR SERIES
Presents a traveling exhibit especially for Graduate Students

            COMPUTER POWER AND HUMAN FASHION

                       Featuring

               THE VON NEUMANN TURTLENECK
                          Plus
           NILS NILSSON'S ALPHA-BETA CUTOFFS

   Also featuring a rare Huffman-clothes encoating and
    a dress once worn by Herb Simon's wandering Aunt.

         Hosts: Bonnie Dorr and Dave Braunegg.

------------------------------

End of AIList Digest
********************

∂21-Dec-84  1814	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #181    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Dec 84  18:14:01 PST
Date: Fri 21 Dec 1984 14:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #181
To: AIList@SRI-AI.ARPA


AIList Digest           Saturday, 22 Dec 1984     Volume 2 : Issue 181

Today's Topics:
  Math - Fermat's Last Theorem,
  AI Tools - XLISP Interpreter & PROLOG & Expert System Tools,
  Reports - SEAI Survey & Winograd on Semantics & Barwise on Logic,
  Opinion - Skeptical Viewpoints,
  Seminar - REVE: Solving Problems in Equational Theories  (CSLI),
  Course - Reasoning About Knowledge  (SU)
----------------------------------------------------------------------

Date: 19 December 1984 1724-EST
From: Oswald Wyler@CMU-CS-A
Subject: Fermat's last Theorem

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The first two sentences of an AMS abstract, 816-11-188, by Chen Wenjen,
read: The missing proof of Fermat's Last Theorem has been rediscovered.
The proof is elementary, zigzag, and truly wonderful as claimed by
Fermat nearly three and a half centuries ago.
Anyone know more about this?

------------------------------

Date: 19 Dec 1984 2001 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: Xlisp interpreter

Some time back David Betz announced he'd placed into the public domain
a Lisp interpreter with object-oriented extensions.  Where is it stored
in FTPable form?  Thanks.
                                        Larry @ jpl-vlsi

------------------------------

Date: Thu, 20 Dec 84 00:06 MST
From: May%pco@CISL-SERVICE-MULTICS.ARPA
Subject: Re Issue 179, "micro-PROLOG info request"

Dr.  George Luger, at the University of New Mexico, is developing a
Prolog that runs on PC-compatibles.  It is currently in beta-test.  (no
phone # available)

Also, the University of York, Heslington, York, YO1 5DD, England, has a
C&M Prolog that is written in standard Pascal.  It requires three
file-system-specific procedures to be written for the host, which is
usually a minor job.  The original version compiled cleanly under
Turbo-Pascal but we haven't yet checked it out for correct execution.
The same source compiled and executed cleanly on a mainframe host.
Contact Mrs.  Jenny Turner, Secretary, Software Technology Centre,
telephone 0904 59861, or at the above address.  (A few months ago, they
were charging 200 Pounds.)

------------------------------

Date: Thu, 20 Dec 84 15:20:44 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Prolog on Micros.

There is an article in the December 1984 issue of Byte on
`micro-Prolog', which runs on CP/M and MS-DOS machines
(including the IBM PC).  It is distributed in the United
States by Programming Logic Systems, 31 Crescent Drive,
Milford, Connecticut 06460, 203 877 7988.
                                            -- Harry

------------------------------

Date: Thu, 20 Dec 84 07:07:46 pst
From: Paul A. Ehrler <ehrler%cod@Nosc>
Subject: Expert System Tools

    Are  there  any  head-to-head   comparisons   of  the  so-called  'fifth
    generation'  expert  system  building tools like KEE, ART, S1, SRL,  and
    LOOPS? I've heard that ART has been improved since the AAAI conference.
    The demo I saw then was  not very informative, since they didn't have an
    extra Symbolics to put  in their hotel suite for serious shoppers; I was
    more favorably impressed by  KEE  at the time.  As for the others, first
    impressions  are  that  S1  was out of date, SRL was underdeveloped  and
    overpriced  ($70K),  and  LOOPS  was  unsupported,  but  had   lots   of
    potential.    Anything   more  concrete  (performance,  ease   of   use,
    robustness,  support provided, etc) would be welcome, especially  direct
    comparisons.   If  I  missed  any  of  importance  (not  of  the  EMYCIN
    generation, please), that would also be useful to know.

    Speaking of prices, are they serious  about  the  exorbitant  prices for
    secondary copies of the software?  I can understand, given the tradition
    of whatever the market  will  bear, that something extra must be charged
    for more  application,  but we have a LAN of five 1108's all on the same
    project, and I can't see charging more for the secondary copies than the
    machines cost  -  that's  a  big  reason  we're  using LOOPS now.  Maybe
    they're thinking like the micro houses,  assuming  that  since  most  of
    their customers are going  to  cheat,  they'll use the honest suckers to
    subsidize.

------------------------------

Date: Thursday, 20 December 1984 01:26:44 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: SEAI Publications

Another report by SEAI  titled "Artificial Intelligence: A New Tool for
Industry and Business" discusses a number of products in the market.  The
utility of this book, which costs $485, is summarized by Price (see SIGART
Newsletter, Oct. 1984) as "it is expensive but it would cost more to
assemble the same information. It is not directed towards researchers but
managers who want to determine how AI can be effectively used in their
business". I wonder if there is a significant difference in content
between this one and the ones mentioned by Ken Laws!

Sriram

------------------------------

Date: Wed 19 Dec 84 18:32:28-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Reports - Winograd & Barwise

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                         CSLI REPORTS

``Moving the Semantic Fulcrum'' by Terry Winograd (Report No. CSLI--84-17)
has just been published. Report No. CSLI--84-2, ``The Situation in
Logic--I'' by Jon Barwise, which has been out of print, is now available.
To obtain a copy of these reports write to Dikran Karagueuzian, CSLI,
Ventura Hall, Stanford 94305 or send net mail to Dikran at SU-CSLI.

------------------------------

Date: 18 Dec 84 13:03:55 CST (Tue)
From: ihnp4!utzoo!henry@Berkeley
Subject: Re: Personal Assistants -- a skeptical viewpoint

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

> Dear sir--oh, my very dear sir.  Is NOTHING going to cheer you
> up?  Can the micro revolution do nothing to help you?

Nope, I'd rather be grumpy and play Devil's Advocate.  Bah.  Humbug.
(Who is that odd fellow with the chains coming through my wall...?)

> For me, I keep remembering what a joy Electric Pencil was after
> typing millions of words on a Selectric; and while nothing that
> has come after Pencil has been the quantum step up that Pencil
> was in 1977, there has been steady improvement.  Computers make
> my life simpler.  (Well, actually more complex; but I get more
> done, and spend more of  my time doing that which I LIKE
> doing...)

I have similar memories of encountering computerized text editing for
the first time, back in 1972.  I've never written anything substantial
on a typewriter since, and have no wish to.  I do appreciate the vast
improvement computers have brought, and the continuing improvements in
the situation.

What I do dislike is sales hype, or the equivalent, which claims that
innovation X is going to bring about Nirvana here on Earth in just a
few years.  I.e., Real Soon Now.  (Yes, I read and enjoy your column
in Byte.)  In particular, the next time somebody tells me that applied
AI and/or the Fifth Generation is going to solve all my problems, I
think I'm gonna throw up.  The AI folks are notorious for exuberant
promises followed by failure and disillusionment.  I would have
thought they, of all people, would be a bit more cautious about
predicting the Millennium yet again.  Nope, same old snake oil...

What I should have made clearer, in my earlier note, was that I do
expect some very interesting by-products from the inevitable failures.
I have no quarrel with anyone who merely predicts significant advances
and the appearance of useful new tools.  This cloud is indeed likely
to have a silver lining, even though it's not going to be solid
platinum as its proponents claim.

                           Henry Spencer @ U of Toronto Zoology
                            {allegra,ihnp4,linus,decvax}!utzoo!henry

------------------------------

Date: 20 December 1984 00:46-EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Personal Assistants -- a skeptical viewpoint

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

Ah well, I suppose I must agree regarding the hype.
As to AI: there is a famous story.

John McCarthy some years ago is said to have bought a Heathkit
television for the Stanford AI lab.  When it arrived a student
eagerly fell upon it, but was restrained.
        "We will construct a robot to build the kit," McCarthy
is said to have said.
        Last I heard the box was unopened.

        The story is probably apocryphal, but I do recall
the Great Foreign Language Translation Revolution predicted in
the 60's...

------------------------------

Date: Wed 19 Dec 84 18:32:28-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - REVE: Solving Problems in Equational Theories 
         (CSLI)

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                SUMMARY OF NOVEMBER 21 AREA C MEETING

Topic:     REVE: A system for solving problems in equational theories,
              based on term rewriting techniques
Speaker:   Jean-Pierre Jouannaud, Professor at the University of Nancy,
              France, on leave at SRI-International and CSLI.

Equational Logic has been used by mathematicians for a very long
time and was adopted by computer scientists only recently.  Specifications
in OBJ2, an ``object-oriented'' language designed and implemented at
SRI-International, use equations to express relations between
objects.  To express computations in this logic, equations are used
in one direction only, i.e., as rewrite rules.  Making proofs with rules
in this logic requires the so-called ``confluence'' property, which
expresses that the result of a computation is unique, no matter in what
order the rules are applied.  Proofs and computations are therefore
integrated in a very simple framework.  When a set of rules does not
have the confluence property, it is augmented with new rules, using the
so-called Knuth-Bendix completion algorithm, until the property is
satisfied.  This algorithm requires the set of rules to have the
termination property, i.e., an expression cannot be rewritten forever.
It has been proved that this algorithm allows one to perform an
inductive proof without explicitly invoking an induction principle, and
to solve equations (unification) in the corresponding equational
theory as well.
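[The rewriting idea above is easy to demonstrate.  The following toy
sketch in Python is my own illustration, not REVE itself: it applies a
small confluent, terminating rule set to a fixpoint, so every input
reduces to a unique normal form.  Variables are elided for brevity;
the rules act on the literal symbol x.]

```python
# Toy term rewriting to normal form (illustration only, not REVE).
# These group-like rules are confluent and terminating on the inputs
# shown, so rewriting to a fixpoint yields a unique normal form
# regardless of the order in which rules fire.
RULES = [
    ("(x*1)", "x"),      # right identity
    ("(1*x)", "x"),      # left identity
    ("(x*i(x))", "1"),   # right inverse
]

def rewrite(term, rules):
    """Apply rules (as literal substring replacements) to a fixpoint."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)  # rewrite one redex
                changed = True
    return term
```

For example, `rewrite("((x*1)*i(x))", RULES)` first simplifies the inner
`(x*1)` and then applies the inverse rule, reaching the normal form `1`.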

------------------------------

Date: Fri, 14 Dec 84 16:15:12 PST
From: Joe Halpern <halpern%ibm-sj.csnet@csnet-relay.ARPA>
Subject: Course on reasoning about knowledge

I'll be teaching a course on reasoning about knowledge at Stanford
in the winter quarter, along much the same lines as [my IBM-SJ] seminar.
Here are the details:

Reasoning About Knowledge (CS400B)
Knowledge seems to play a crucial role in such diverse areas as
distributed systems, cryptography, and artificial intelligence.
We will examine various attempts at formalizing reasoning about
knowledge, and see to what extent they are applicable to the areas
mentioned above.  In particular we will consider such problems as
resource-bounded reasoning, inconsistency of beliefs, belief revision,
and knowledge representation.  There is no text for the course; we
will be concentrating on current research.
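[For the curious, the possible-worlds definition of knowledge at the
heart of such formalizations can be sketched in a few lines.  The
following is my own toy illustration in Python, not course material:
agent i knows p at world w iff p holds at every world i considers
possible from w.]

```python
# Minimal possible-worlds (Kripke) sketch -- illustration only.
worlds = {"w1", "w2", "w3"}

# Accessibility relation: the worlds the agent considers possible
# from each world.
possible = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}

# Valuation: the atomic facts true at each world.
facts = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

def knows(w, prop):
    """K(prop) holds at w iff prop is true at all accessible worlds."""
    return all(prop in facts[v] for v in possible[w])
```

At w1 the agent knows p, since p holds at both accessible worlds; at w3
it does not, since p fails at w3 itself.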

Officially the course meets on Tuesdays in the winter quarter,
from 2:45-5:00.  I would be willing to consider moving that time
to another afternoon (although I suspect it might be hard to
reach agreement).  It might be nice to push the meeting time forward
to 1:30-3:45, so those interested can attend the CS Colloquium.
I've enclosed a brief (tentative!) outline for the course.  As of now,
the emphasis is on material I'm most familiar with (i.e., papers
I've written), but I would be interested in hearing suggestions
from participants in the course on other material to cover.
Auditors are welcome.

Week 1 and 2:  Philosophical background and thorough introduction to
               possible-worlds semantics for knowledge.
  References:  W. Lenzen, Recent work in epistemic logic, Acta
               Philosophica Fennica, 1978.
               J.Y. Halpern and Y.O. Moses, A guide to the modal logics
               of knowledge and belief, to appear as an IBM RJ, 1985.
Week 3:        The "knowledge structures" approach
  References:  R. Fagin, J.Y. Halpern, and M.Y. Vardi, A
               model-theoretic analysis of knowledge, in "Proceedings
               of the 25th Annual Conference of Foundations of
               Computer Science", 1984, pp. 268-278
Week 4:        Knowledge in distributed systems
  References:  J.Y. Halpern and Y.O. Moses, Knowledge and common
               knowledge in a distributed environment, in "Proceedings
               of the 3rd ACM Conference on Principles of Distributed
               Computing", 1984; IBM RJ 4421, 1984.
               R. Strong and D. Dolev, Byzantine agreement, IBM RJ 3714,
               1982.
Weeks 5 and 6: Resource-bounded and incomplete knowledge, relevance
               logic, the "syntactic approach"
  References:  H.J. Levesque, A logic of implicit and explicit belief,
               Proceedings of the National Conference on Artificial
               Intelligence, 1984, pp. 198-202.
               K. Konolige, A deduction model of belief, Ph.D. Thesis,
               Stanford University, 1984.
               R. Fagin and J.Y. Halpern, Knowledge and awareness,
               unpublished manuscript, 1985.
               S. Shapiro and M. Wand, The relevance of relevance,
               Indiana University Technical Report No. 46, 1976.
Weeks 7 and 8: Belief revision and non-monotonic reasoning
  References:  D. McDermott and J. Doyle, Non-monotonic logic I,
               Artificial Intelligence, Vol. 13, Nos. 1-2, 1980, pp. 41-72.
               R. Reiter, A logic for default reasoning,
               Artificial Intelligence, Vol. 13, Nos. 1-2, 1980, pp. 81-132.
               J. McCarthy, Circumscription - a form of non-monotonic
               reasoning, Artificial Intelligence, Vol. 13, Nos. 1-2, 1980,
               pp. 27-39.
               W.R. Stark, A logic of knowledge, Zeitschrift fur
               Mathematische Logik und Grundlagen der Mathematik 27,
               pp. 371-374, 1981.
               D. McDermott, Non-monotonic logic II: non-monotonic modal
               theories, Journal of the ACM, Vol. 29, No. 1, 1982,
               pp. 35-57
               R.C. Moore, Semantical considerations on non-monotonic
               logic, SRI Technical Note 284, 1983.
               H.J. Levesque, A formal treatment of incomplete knowledge
               bases, Fairchild Technical Report No. 614, FLAIR Technical
               Report No. 3, 1982.
               K. Konolige, Circumscriptive ignorance, Proceedings of
               the National Conference on Artificial Intelligence, 1982,
               pp. 202-204.
               J.Y. Halpern and Y.O. Moses, Towards a theory of knowledge
               and ignorance, Proceedings of Workshop on Non-monotonic
               Reasoning, 1984; IBM RJ 4448, 1984.
               R. Parikh, Monotonic and non-monotonic logics of
               knowledge, unpublished manuscript, 1984.
Week 9:        Knowledge bases
  References:  H.J. Levesque, A formal treatment of incomplete knowledge
               bases, Fairchild Technical Report No. 614, FLAIR Technical
               Report No. 3, 1982.
               K. Konolige, A deduction model of belief, Ph.D. Thesis,
               Stanford University, 1984.
Week 10:       Knowledge and cryptography; puzzles
  References:  M.J. Merritt, Cryptographic protocols, Ph.D. Thesis,
               Georgia Institute of Technology, 1983.
               S. Goldwasser, S. Micali and C. Rackoff, Knowledge
               complexity, unpublished manuscript, 1984.
               X. Ma and W. Guo, W-JS: a modal logic about knowing,
               Proceedings of the 8th International Joint Conference
               on Artificial Intelligence, 1983.
               D. Dolev, J.Y. Halpern and Y.O. Moses, Cheating spice
               and other stories, unpublished manuscript, 1984.

------------------------------

End of AIList Digest
********************

∂26-Dec-84  0122	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #182    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Dec 84  01:22:12 PST
Date: Tue 25 Dec 1984 23:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #182
To: AIList@SRI-AI.ARPA


AIList Digest           Wednesday, 26 Dec 1984    Volume 2 : Issue 182

Today's Topics:
  AI Tools - Prolog for PCs,
  Linguistics - Oxymorons,
  Humor - Malgorithm Contest,
  Bindings - Navy Center for Applied Research in AI,
  News - Recent Articles,
  Opinion - Personal Assistants,
  Workstations - Very Inexpensive LISP Machine,
  Courses - Intelligent Tutoring Systems  (SU) &
    Computational Semantics  (SU)
----------------------------------------------------------------------

Date: Sat, 22 Dec 84 21:12 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Prolog for PC-type machines


Expert Systems Limited has a Prolog for PC-type machines that seems
pretty good.  It is Clocksin & Mellish compatible.  We've run it with
no problems on both an IBM PC and a DEC Rainbow, so it will probably
run on any MS-DOS machine.  There is also a CP/M version.  This is the
Prolog that Teknowledge used to implement M.1 in.  The home address for
the company is:

        Expert Systems Limited
        9 West Way
        Oxford OX2 0JB
        England

There is a U.S. affiliate, located in the Philadelphia area, that
has the US rights.  I don't have the address at the moment.

------------------------------

Date: Sat, 22 Dec 84 21:29 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Oxymorons, Pleonasms and various forms of Bull


Saul Gorn has published a compendium of material related to the recent
note on "Oxymorons, Pleonasms and various forms of Bull" that he has
collected in his 50 year career as a mathematician and computer
scientist.  It is available as "Self-Annihilating Sentences; Saul
Gorn's Compendium of Rarely Used Cliches"; Technical Report
MS-CIS-83-22.  It can be obtained by writing:

        Publications
        Computer and Information Science
        The Moore School
        University of Pennsylvania
        Philadelphia, PA 19104

Tim

------------------------------

Date: Fri, 21 Dec 84 14:53:45 mst
From: jlg@LANL (Jim Giles)
Subject: Contest

It's the first annual Complete the Book Title Contest (no prizes awarded;
none were donated).

'Malgorithms + Data Scrambling = ___________________'

First prize (which is worth twice as much as the other prizes) will be
awarded to the person who guesses the author of the above work.

Send answers to jlg@lanl.ARPA and I will summarize.

------------------------------

Date: Wed, 19 Dec 84 10:25:36 est
From: Dennis Perzanowski <dennisp@nrl-aic>
Subject: erratum

Please be advised of the following correction in the address for the
Navy Center for Applied Research in Artificial Intelligence which was
recently broadcast:

     U.S. Navy Center for Applied Research in Artificial Intelligence
     Naval Research Laboratory - Code 7510
     Washington, DC  20375-5000

The address of the Civilian Personnel Office to which all resumes and
inquiries should be sent is correct as printed in the announcement.
Sorry for any inconvenience.  Thank you.

------------------------------

Date: Sun, 23 Dec 84 12:46:37 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: AI News


The Institute, Volume 9 Number 1 January 1985 Page 10
Experts Envision New Applications for AI Technology on the Shop Floor
Describes work for automatically constructing part programs for milling.
Also discusses applications of AI to such industries as paperboard packaging
Proceedings of an Expert System Session of Autofact 6, which include
papers on these subjects, are available from SME, One SME Drive,
P.O. Box 930, Dearborn, Mich. 48121.


Electronics Week, October 15 1984 page 14
Discusses various Fifth Generation projects in America and Japan


IEEE Computer November 1984 Volume 17 No. 11

Page 117 Three-paragraph review of the National Conference on Artificial
Intelligence in Austin by Elaine Rich

Page 114 summarizes talk by Robert Miller, senior vice president at Data
General, on "personal expert systems"

Page 65 The Library of Computer and Information Science is again offering
the three-volume Handbook of Artificial Intelligence for only $4.95 as
a sign-up bonus.


Electronics Week October 29, 1984, page 34
Discusses Quintus Computer Systems Prolog systems and development environments
for Prolog.


Electronics Week September 24, 1984 page 59
Interview with Larry Harris who is president of Artificial Intelligence
Corp., the people behind the Intellect natural language database interface


Communications of the ACM December 1984 Volume 27 Number 12 page 1227
Discusses a solution to the travelling salesman problem with thousands
of nodes.  The solution was used to determine drilling paths for holes in
PC boards.  It uses a cluster-based approach.
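
The cluster-based idea can be sketched roughly as follows (a hypothetical
illustration in modern Python, not the algorithm from the CACM article):
partition the holes into coarse grid cells, tour each cell with a greedy
nearest-neighbour pass, and chain the cells together.

```python
import math

def nearest_neighbour_tour(points, start):
    """Greedy tour over points, beginning at start."""
    tour = [start]
    remaining = set(points) - {start}
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def clustered_tour(points, cell=10.0):
    """Bucket points into a coarse grid, then tour cluster by cluster."""
    clusters = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        clusters.setdefault(key, []).append(p)
    tour = []
    for key in sorted(clusters):            # visit cells in grid order
        pts = clusters[key]
        # enter each cell at the point nearest the end of the tour so far
        first = min(pts, key=lambda p: math.dist(tour[-1], p)) if tour else pts[0]
        tour.extend(nearest_neighbour_tour(pts, first))
    return tour

holes = [(0, 0), (1, 1), (2, 0), (25, 25), (26, 24)]
tour = clustered_tour(holes)
```

Clustering trades a little tour length for speed: each greedy pass runs over
one small cell rather than over thousands of holes at once.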

------------------------------

Date: Fri 21 Dec 84 20:40:13-EST
From: Wayne McGuire <MDC.WAYNE@MIT-OZ>
Subject: Personal Assistants

     I agree with Henry Spencer that many claims from the AI community
are overblown, and that we need to maintain a healthy stance of
skepticism about the Next Big Revolutionary Breakthroughs that are
forecast every week.  However:

     (1) I don't think the present generation of outliners, natural
language interfaces, and free-form databases, which are some of the
basic building blocks of idea processors, are, as you insist, a "fad."
Products like Thinktank and Intellect are not vaporware: they have
firmly established themselves in the marketplace, and are not going to
disappear.  They are a permanent and welcome fixture in the world of
microcomputer and (in the case of Intellect) mainframe software.

     (2) Mitch Kapor's remarks about AI are not, as you put it, a lot
of "marketing hype." As I understand it, a company has been spun off
from Lotus which is doing serious research in natural language
processing.  That company will probably develop a product somewhat
like Intellect or Clout which will become an essential element in
future integrated software from Lotus.

     (3) A pencil and paper is fine, but I much prefer a Model 100 as
a portable device for recording and shaping notes and ideas.  A Model
100 with significantly greater memory, built-in idea processing
software, and a connector to an optical disk storage device would, I
suspect, wean many people away from paper and pencils for good.

     (4) Building a powerful idea processor is very much a function of
available memory.  Framework, for instance, would be a much more
effective product if the quality of its word processor and database
management system could be raised to the level of XyWrite II Plus and
MDBS III.  To acquire that kind of power would require an extra
megabyte or two of memory.

     (5) The privacy issue in regard to optical disks is a red
herring.  The federal government already has easy access to much of
the sensitive information which would be stored on a personal disk.  A
biodisk might give individuals an opportunity to know as much about
themselves as the government does.

-- Wayne McGuire <wayne%mit-oz@mit-mc>

------------------------------

Date: 24 Dec 1984 00:07-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Very Inexpensive LISP Machine

I have recently been toying with the idea of very inexpensive lisp
machines (VILM).  The ideal VILM would support a hi-res display,
a keyboard, mouse, RS-232/422 interface, floppies (5 1/4 or 3 1/2 inch),
support an interpreter and compiler plus other handy functions (fasl,
debugger, trace, maybe an object language), provide a window package
(multiple fonts, editor, etc.), be portable (so I can drag it back
and forth to work easily), and be able to support, as options:
virtual memory with a hard disk (10M, 20M, or whatever is cheap),
Ethernet, and different-size physical memory (512K, 1M,
2M).

As I see it, the technology exists right now to build such a beast (by
"right now" I mean "order it from BYTE magazine").  The hi-res display,
keyboard, mouse, RS-232/422 and floppies are supplied by an Apple Macintosh
(approximately $1700-2800).  The remaining non-optional stuff would be
supplied (initially) by a box similar in size to the Mac containing an
8-slot Multibus card rack, power supply, fan,
M68010 processor card, ROM card (interpreter, compiler, other handy
stuff), RAM card or cards (512K or more), interface logic to talk to the
Mac (total < $5,000).

The LISP would be Portable Standard Lisp (PSL) which is
cheap, available, and could be loaded into ROMs.  The Mac would
handle the display and filing functions. It would be portable
since the Mac will zip into a bag and so could the additional box.

Total cost would be around $8,500 to build from scratch (the
Imagen IMPRINT laser printers use this concept, so I know it's
workable).

Some tense hacking plus a disk controller card and 10-30M Winchester
could make a single process, virtual memory system possible
for an additional $5,000 (total price ~ $13,500).

Enhancements could include a bit-slice processor board with a real
instruction set, tape cartridge backup, more disk, and a real
operating system with files, multiple processes, and ether/arc/apple
net.

My goal is a VILM which is affordable, flexible, and
able to support truly tense lisp hacking in a useful way.  Is there
any such thing out there?  I would like to correspond with anyone
having interest in VILMs (ideas, designs, hardware and software
implementations).

                                                        -Todd K.

------------------------------

Date: Fri 14 Dec 84 23:29:42-PST
From: Derek Sleeman <SLEEMAN@SUMEX-AIM.ARPA>
Subject: Intelligent Tutoring Systems course - Winter Quarter

    [Forwarded from the Stanford bboard by Laws@SRI-AI.]

This course was given for the first time last session; this year the
course will have more of a workshop flavour.


Topic:  Some issues in Intelligent Tutoring Systems (ITSs) CS 324X & Ed. 495X

Instructor:  D Sleeman

Time/Location: Winter Quarter: Wednesday, 4-6 p.m., Room 334 Cubberley
Audience: Graduate Students in Computer Science, Education & Psychology.
Prerequisites: Consent of Instructor required
Number of units: 2-3

The seminar will highlight research problems which are encountered in
implementing automated teaching systems, principally from an AI perspective
and secondarily from Cognitive Science and instructional perspectives.  In
particular we will review the "traditional" CAI systems and the more recent
activities in ITSs within these frameworks and point out the currently
perceived shortcomings, which include:

        -  inappropriate feedback due to inadequate student models
        -  inadequate conceptualization of the domain
        -  unprincipled tutoring strategies
        -  user interaction with the system is too restricted

The systems which have concentrated on the issue of inferring a student
model, namely BUGGY and PIXIE (formerly LMS), will be studied in some depth.
Inferring a model of a student's problem solving, even in a restricted
domain, is very complex since, given N rules, there are potentially N!
models to be considered.  We shall discuss how these modelling systems have
addressed and "solved" the combinatorial explosion problem.  We will then
consider how some of these techniques could be applied to the more general
problem of modelling a user of a computer system or package.  The class will
have access to several mini-versions of ITSs which have very recently been
transferred to the IBM PC -- these include a version of BUGGY, PROUST and
the instructor's PIXIE system.  Indeed the principal task for the class will
be to implement a database for the PIXIE system.
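
The N!-model explosion can be illustrated with a toy sketch (made-up
mal-rules in Python; this is not the actual PIXIE or BUGGY machinery):
a candidate model is an ordering of the rules, so N rules yield N!
candidates, which must be pruned against the student's observed answers.

```python
from itertools import permutations

# Hypothetical mal-rules: each maps a subtraction problem (a, b) to an answer.
rules = {
    "correct":             lambda a, b: a - b,
    "smaller-from-larger": lambda a, b: abs(a - b),
    "zero-result":         lambda a, b: 0,
}

# The student's worked problems: the answer 4 for (3, 7) betrays a bug.
observed = [((7, 3), 4), ((3, 7), 4)]

def consistent(model):
    """A model (an ordering of the rules) fits if the rule it tries
    first reproduces every observed answer."""
    first = rules[model[0]]
    return all(first(a, b) == ans for (a, b), ans in observed)

# N rules give N! candidate orderings; pruning against data cuts them down.
candidates = list(permutations(rules))
surviving = [m for m in candidates if consistent(m)]
```

Here only the orderings that try "smaller-from-larger" first survive,
cutting 6 candidates to 2; the pruning in the real modelling systems is of
course far more sophisticated.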


The course will conclude with a discussion of open research issues in the area.

Literature:  Principal source will be Intelligent Tutoring Systems, Academic
Press 1982 (eds. Sleeman and Brown).  Additional BUGGY and LMS papers and
selected papers from Mental Models, Erlbaum, 1983 (eds. Gentner & Stevens).


Queries may be addressed to SLEEMAN@SUMEX, or 497-3257.

D. Sleeman, 10 December 1984

------------------------------

Date: 18 Dec 84  1105 PST
From: Terry Winograd <TW@SU-AI.ARPA>
Subject: Course on Computational Semantics - Ling/CS 276

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Computer Science 276 / Linguistics 276
Computational Models for the Semantics of Natural Language
Winter 1985
Terry Winograd

MWF 10-11, Terman 156 (televised)

In this course we will develop the theoretical basis for the implementation
of computer systems dealing with the meaning of natural language.  We
will cover a variety of semantic and pragmatic areas, developing three
aspects of each:

1) The formal theories relevant to the area, drawn from work in linguistics
   and the philosophy of language

2) Computational issues that arise, and the computational mechanisms that
   have been developed to augment or supplant the standard formal framework

3) Limitations of the formalization and problems in extending it to cover
   the full range of related phenomena.

Areas covered will include lexical meaning, compositionality, quantification
and reference, temporality, speech acts, and schematic structures.

I will describe a number of existing AI systems in light of these
theoretical foundations, but will not attempt to provide a comprehensive
coverage of the currently available systems or to deal in depth with
details of implementation.  The course is intended to serve as a basis for
understanding what is being done and what can be done, not as a practical
"how-to-do-it" course.

There will be three lectures a week, and some homework assignments.  There
will be a mid-term and a final exam.  No computer programming exercises or
project will be required.

There is no regular textbook.  Course notes will be duplicated and made
available, based partly on a textbook I am writing.

The course will assume a background (either prior, or through additional
study during the course) in two areas: formal logic and basic techniques
of artificial intelligence.  Two books are recommended:

  Logic in Linguistics, by Allwood, Andersson and Dahl, is recommended to
  anyone not already well versed in the logical formalisms used in
  semantics, including basic set theory, propositional and predicate logic,
  deduction rules, and rudiments of modal and intensional logic.

  Principles of Artificial Intelligence, by Nils Nilsson, is recommended
  as an introduction to basic AI techniques for planning, deduction, and
  representation.

We will not cover most of this material in class, but will provide
tutorial opportunities for those students who need to fill in the
background as we go.  There are no other prerequisites in either
computation or linguistics, except for a general familiarity with concepts
of programming (as gained from any programming course or experience).

------------------------------

End of AIList Digest
********************

∂31-Dec-84  1338	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #183    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 Dec 84  13:38:03 PST
Date: Mon 31 Dec 1984 11:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #183
To: AIList@SRI-AI.ARPA


AIList Digest            Monday, 31 Dec 1984      Volume 2 : Issue 183

Today's Topics:
  Projects - Cognitive Science Dictionary,
  AI Tools - Cheap Lisp Machines & Xerox,
  News - Recent Articles & Thinking Machines Corporation & Space Shuttle,
  Courses - Massively Parallel Models of Intelligence  (CMU) &
    Reasoning about the Physical World  (UIUC)
----------------------------------------------------------------------

Date: Sun, 30 Dec 84 21:53:17 est
From: 20568%vax1@cc.delaware (FRAWLEY)
Subject: Cognitive Science Dictionary


I recently spoke with a publisher about the possibility of compiling
a Dictionary of Cognitive Science. I'm sending out this preliminary
inquiry to you all to see what you think of the idea. I'd appreciate
responses to any or all of the following:

1. Is the idea of such a dictionary good, bad, ridiculous...?

2. Is such a dictionary a feasible project?

3. If the project is feasible, what areas of Cognitive Science
ought to be covered?

4. What do you think of the marketability of such a dictionary?

5. If the project is feasible, what form should the dictionary take
(i.e., standard dictionary form, encyclopedic form, etc.)?

You can send your responses via the AIList or to me directly.

Thanks,

Bill Frawley
Linguistics
U. of Delaware

20568.ccvax1@udel

------------------------------

Date: Thu, 27 Dec 84 17:07:33 pst
From: hplabs!sdcrdcf!darrelj@Berkeley (Darrel VanBuer)
Subject: A Very Cheap Lisp Machine

To be slightly partisan toward the machines I
use, Xerox Dandelions can be had for under $19,000 in some configurations.
For not much over the high end of the proposal in V2 #182, you GET the high
end machine (except addition of Ethernet and a display with 6 times the
pixels of the Macintosh).  About a third of the cost of a Dandelion is for
the Interlisp software (inferred from the unbundled Star price list).
This is a reasonable cost given the complexity of a full-blown display-oriented
Lisp environment and the (relatively) small market for Lisp machines.

Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                            !sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: 26 Dec 1984 1757 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: Xerox AI

Paul Erler's message reminds me: the latest Computerworld has a full-page
ad with the banner XEROX ANNOUNCES A 15-YEAR HEADSTART IN ARTIFICIAL
INTELLIGENCE.  It seems they're now selling and supporting what they call
the Xerox AI System.  It includes a combination of 1108 or 1132 workstations,
Interlisp D and LOOPS, and training as well as support.  Added info can be
gotten from
                        attn: AI Marketing, MS 1245
                        Xerox Special Information Systems
                        Artificial Intelligence Business Unit
                        250 N. Halstead St., PO Box 7018
                        Pasadena, CA 91109

------------------------------

Date: Sat, 29 Dec 84 06:24:05 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: Recent AI Articles


New Scientist November 8, 1984 Volume 104 No. 1429 pp 10
Japan unveils its fifth generation


New Scientist November 15, 1984 Volume 104 No. 1430
AI is Stark Naked from the Ankles Up.  [An entertaining article
claiming that the emperor's new clothes (AI) consist only of
sneakers (20-year-old expert systems technology).  -- KIL]

Distributed Computing
APIC Studies in Data Processing, Volume 20
Edited by F. B. Chambers, D. A. Duce, and G. P. Jones
Academic Press, $22.50
The following titles in this compendium might be of interest:
  Using Algebra for Concurrency
  Reasoning about Concurrent Systems
  Functional Programming
  Logic Programming and Prolog

------------------------------

Date: Mon 31 Dec 84 11:40:57-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Thinking Machines Corporation

From the January, 1985, issue of Omni, p. 33, by Edward Rosenfeld:

[...]
The latest fusion of academe and venture capital is Thinking
Machines Corporation (TMC), a Cambridge, Mass., company that
boasts Marvin Minsky, cofounder of MIT's AI Laboratory and one
of the pioneers of AI, as one of its founders.

A group of investors headed by CBS founder William Paley has
reportedly put up a $10 million stake to get TMC off the ground.
AI insiders refer to the company as the Marv and Marv Show because,
in addition to Marvin Minsky, TMC has also acquired the services
of Marvin Denicoff, who formerly guided the AI programs at the
Office of Naval Research.

The company's first product, currently in prototype development,
will be the connection machine, a parallel-processing supercomputer
designed by W. Daniel Hillis, of MIT.  [...]

                                        -- Ken Laws

------------------------------

Date: Fri, 28 Dec 84 14:04:56 est
From: nikhil@mit-fla (Rishiyur S. Nikhil)
Subject: AI and the Shuttle


Here are some items of interest from Aviation Week and Space Technology:

++++ AWST Sep 17, 1984, page 79

Johnson Space Center (Houston) officials expect to use AI techniques in
future Shuttle missions, beginning late 1984 or in 1985.

The first use will be Navex, a "navigational expert system". Currently, the
navigation console position is manned in 4 shifts, with 3 controllers per
shift. Each person needs 2 years of training to make high-speed decisions
about shuttle velocity and trajectory.
JSC officials expect to man it with one controller per shift in conjunction
with Navex.

Navex is built on ART (Automated Reasoning Tool), which is written in Lisp.
ART is a product of Inference Corp. of Los Angeles. Navex was developed by
Inference Corp. and LinCom Corp. of Houston.

++++ AWST Dec 10, 1984, page 24

NASA will test Navex along with its human counterparts in Jan 1985. A Symbolics
computer will run in a lab near Mission Control at Johnson Space Center,
Houston, and will be wired to the navigator console position. They expect
it to make decisions about Shuttle velocity and trajectory six times faster
than humans.

By March, an AI program will perform Shuttle electrical system checks during
pre-launch ground preparations. The actual program is finished, but
documentation to explain it will take 3 months. (!!)

By late summer 1985, Johnson Space Center will complete an expert system
that captures the expertise of a person whose job would be to talk the
shuttle down during re-entry if it were to emerge from a radio blackout
with malfunctioning navigation instruments. It will take 2 months to build,
and will run in Mission Control as an advisor to flight controllers.

------------------------------

Date: 22 Dec 1984 1152-EST
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Course - Massively Parallel Models of Intelligence

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

                      Advanced Course on:

           MASSIVELY PARALLEL MODELS OF NATURAL INTELLIGENCE

                  Geoffrey Hinton & Scott Fahlman

This is a 7 week advanced course. It meets from 11.30 - 12.50 on Wednesdays
and Fridays in 5409, starting on Wednesday Jan 16.  A reading list and a brief
description of each lecture will be available from Geoff Hinton on Jan 15th.

The course covers models of SEARCH, REPRESENTATION, and LEARNING in
networks of simple processing elements that are richly interconnected.  The
emphasis will be on the computational properties of these networks, but we will
also cover the psychological and neurophysiological evidence for and against
various models.

SEARCH
The main search technique used in these networks is iterative relaxation.
Five different models of relaxation will be presented and their performance
will be compared on a variety of tasks including stereo-fusion,
surface-interpolation, shape-recognition, and figure-ground segmentation.
Other search methods will also be covered.
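
The flavour of iterative relaxation can be conveyed by a minimal sketch
(a generic illustration, not one of the five models the course compares):
for surface interpolation, clamp the units where depth is known and let each
interior unit repeatedly average its neighbours until the network settles.

```python
def relax(values, clamped, sweeps=2000):
    """Iteratively replace each unclamped unit by the mean of its neighbours."""
    v = list(values)
    for _ in range(sweeps):
        for i in range(1, len(v) - 1):
            if i not in clamped:
                v[i] = 0.5 * (v[i - 1] + v[i + 1])
    return v

# Depth is known only at the two ends; relaxation fills in the interior,
# settling on the smoothest (here, linear) surface through the constraints.
surface = relax([0.0, 0.0, 0.0, 0.0, 10.0], clamped={0, 4})
```

Each unit updates from purely local information, which is what makes the
scheme a natural fit for massively parallel networks.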

REPRESENTATION
To make efficient use of the representational capacity of massively parallel
networks, it is often necessary to use novel kinds of representation in which
individual processing elements do not have a simple relationship to the
concepts being represented.  We will cover methods of representing continuous
variables, high-dimensional feature spaces, spatial transformations, simple
associations, schemas, trees, production systems, and Clyde.  We will discuss
the interaction between representational efficiency and ease of search for each
kind of representation.

LEARNING
We will cover the history of attempts to make networks that learn by modifying
connection strengths, and show why these attempts generally failed or worked
only for very circumscribed domains.  The difficult problem in learning is to
construct new representations.  We will compare three different models that
create representations by modifying connection strengths.  We will also compare
these connectionist models with more conventional AI learning methods.

------------------------------

Date: Thu, 27 Dec 84 20:57:01 cst
From: Kenneth Forbus <forbus%uiucdcsp@uiuc.ARPA>
Subject: Course - Reasoning about the Physical World  (UIUC)

Course Announcement - U. of Illinois at Urbana

CS 497, Spring 1985
Title: Reasoning about the Physical World
Instructor: Ken Forbus

This graduate seminar will examine principles and methods developed in
Artificial Intelligence for reasoning about problems involving space, time,
processes, and action.  Topics include:  solving word problems; qualitative
physics; planning actions, experiments, assemblies, and routes; analysis,
design, troubleshooting, and control of engineered systems.  A solid AI
background will be assumed.

Outline:

1. Solving Textbook Physics Problems

        Survey of programs: Charniak's CARPS, Novak, Larkin,
                Bundy, de Kleer.

        Transformation from natural language to equations

        Symbolic algebra

2. Qualitative Physics

        Qualitative State representation: ontology,
                making predictions, correlating qualitative
                results with quantitative results, using
                qualitative reasoning to guide search for
                quantitative solutions.

        Qualitative Process theory: processes as mechanisms of
                change, influences as representation of equations,
                basic deductions sanctioned by QP theory, prediction,
                measurement interpretation.

        Qualitative System Dynamics: breakdown of processes when
                system connectivity becomes high, device-centered
                model for physics.  Confluences as representation of
                equations, constraint-satisfaction and propagation
                techniques for solving confluences.

3. Planning

        "Classical" AI planning: GPS, STRIPS, NOAH, MOLGEN.  Limitations
                due to inadequate models of time, space, and action.

        Modelling time: Histories and Chronicles.  Allen's interval-based
                formulation.  Vere's DEVISER. Theories of action.

        Modelling space: symbolic, metric, and analog representations
                of space.  The "visual routines" model of human spatial
                competence.

        Robot planning (routes): Configuration space approach and related
                computational problems.  Quantizing free space into
                 "freeways".

        Robot planning (assembly):  Symbolic analysis of errors.  Automatic
                insertion of inspection steps into assembly plans.

4. Engineering Problem Solving

        Analysis: Propagation of constraints, EL.  Qualitative
                analysis for functional recognition.

        Design: SYN, the role of causality in circuit design,
                circuit grammars.

        Troubleshooting: Digital electronics: Davis' group and the DART
                 project.  Continuous systems: SOPHIE.

        Control: Temporal logic for synthesizing control strategies.

------------------------------

End of AIList Digest
********************

∂04-Jan-85  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #184    
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Jan 85  22:50:07 PST
Date: Fri  4 Jan 1985 20:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>
Reply-to: AIList@SRI-AI.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #184
To: AIList@SRI-AI.ARPA


AIList Digest            Saturday, 5 Jan 1985     Volume 2 : Issue 184

Today's Topics:
  Symbolic Algebra - Package Request,
  Expert Systems - Smalltalk Application,
  AI Tools - Inexpensive Lisp Machines,
  Mathematics - Fermat's Last Theorem,
  Cognitive Science - Dictionary Project,
  Anecdote - SAIL TV Story,
  Opinion - 5th Generation Research,
  News - Reading Machines,
  Conferences - Upcoming Submission Deadlines,
  Seminars - Representation and Presentation  (CSLI) &
    Rewrite Rules for Functional Programming  (IBM-SJ)
----------------------------------------------------------------------

Date: Wed, 2 Jan 85 08:34 EST
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Symbolic Algebra Package Request

I would like to obtain a symbolic algebra package which would run on
a VAX/Franz Lisp configuration.  Preferably, I would like one in the
public domain.

D. E. Stevenson,
Department of Computer Science
Clemson University
Clemson, SC 29631
(803) 656-3444

------------------------------

Date: Wed,  2 Jan 85 11:43:21 PST
From: Jan Steinman <jans@mako>
Reply-to: Jan Steinman <jans%mako.uucp@csnet-relay.arpa>
Subject: Smalltalk Expert Systems


    Mike.Rychener@CMU-RI-ISL2:
    Does anyone know of any successful AI applications coded in SmallTalk?
    This was stimulated by the new Tektronix AI machine, whose blurb touts
    its SmallTalk as useful for developing expert systems.

Take a look at the Troubleshooter for the Tektronix 4404.  Although I am not
on the "inside" on this one, it is a rule-based system written in Smalltalk.
One of the program's principals (Jim Alexander) is a Cognitive Scientist and
not, strictly speaking, a programmer, which attests to the ease with which
such things can be done in Smalltalk.

The Troubleshooter has two graphic and several text windows.  The graphic
windows present a schematic and a parts layout, each having little probes that
move from point to point.  A text window asks questions, such as "Is the
voltage at N19 high?"; the answers of such questions cause the probe(s) to
move to the next test point.  Other text windows can be opened on a parts
database, troubleshooting advice, and the actual rules program, among others.
(Remember, the full power of Smalltalk is always available, which is good and
bad!)  A window can be opened on a scope screen, which shows expected
waveforms at various points.

I have seen it; it works; it's fun!  I fixed stereos, transceivers, and color
TVs before getting into computers and know that half the battle in
troubleshooting is often using the service literature!  This application is
sort of a smart, graphics-based, hypertext service manual and would really be
useful.  It is not simply an interesting bit of AI research!

I AM NOT A PART OF THIS PROJECT.  Although I don't want to seem anti-social,
please contact your nearest Tek field office for a demo and more information;
do not contact me!

:::::: Jan Steinman             Box 1000, MS 61-161     (w)503/685-2843 ::::::
:::::: tektronix!tekecs!jans    Wilsonville, OR 97070   (h)503/657-7703 ::::::

------------------------------

Date: 2 Jan 1985 09:58:48-EST
From: kushnier@NADC
Subject: VILM


Todd,
We at NAVAIRDEVCEN are also interested in a low-cost portable LISP machine.
The Mac came up as a possible candidate.  Could you please tell me more
about Portable Standard LISP (PSL)?

We are currently considering implementing an EXPERT SYSTEM written in FORTH
which we would translate into MACFORTH. Unless an external high speed, high
capacity memory device can be utilized, the prospect of using LISP does not
look promising. Keep us informed on your progress.

                                   Ron Kushnier
                                     kushnier@nadc.arpa

------------------------------

Date: Thursday,  3-Jan-85 12:20:36-GMT
From: JOLY QMA (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: Re: Fermat's Last Theorem.


Does the reference to the proof of Fermat's Last Theorem (Vol 2 # 181)
have anything to do with the incorrect proof of Arnold Arnold which
was reported in the Guardian newspaper in October/November 1984 ?

Gordon Joly

gcj@edxa

------------------------------

Date: Wed, 2 Jan 85 15:12:10 est
From: hoffman%vax1@cc.delaware (HOFFMAN)
Subject: Re:  Cognitive Science Dictionary


I think it would be a good idea and might have a good market. I would
hate to be the one doing the compiling, though.

------------------------------

Date: Thu, 3 Jan 85 13:01:25 est
From: chester%vax1@cc.delaware (CHESTER)
Subject: Re:  Cognitive Science Dictionary

A dictionary (with short definitions of terms) would have limited sales,
since it would only be useful to people who are already in the field or who
already have strong motivation to get in the field and are required to buy
it for a course.

An encyclopedia would be better, but I favor a format like that of The
Handbook of Artificial Intelligence, (Barr and Feigenbaum) or the Handbook of
Human Intelligence (Sternberg).  Such a work would appeal to people who have
a moderate interest in the field and might give them suitable orientation
and motivation to join us.

------------------------------

Date: Friday, 21 Dec 1984 18:12-PST
From: imagen!les@su-shasta.arpa
Subject: TV and the 5th generation

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

In response to your 20 Dec. comments on "Personal Assistants", I can
confirm that the TV story is apocryphal.  I bought the Heathkit
television set for the Stanford AI Lab and it was completely assembled
within a few days after arrival, by gnomes, not robots.  Aside from its
use for monitoring "Mary Hartman, Mary Hartman," it served as a
display for computer-synthesized color images.

A creative student (Hans Moravec) soon built a remote control ray
gun that worked rather well.  As I recall, that was a few years before
remote control became available on commercial TV sets.

As for the digs at the AI community by you and others, please do not
paint everyone with the same brush.  In any research field, the
lunatic fringe is much more likely to catch headlines and certain
government grants than those who speak rationally.  The Great Machine
Translation fiasco of the '60s was brought about mainly by the CIA's
slavering desire to leap ahead in an area where no one knew how to
walk yet.

An even greater fiasco was the series of "Command and Control" systems
assembled by the Air Force and others in the '50s, '60s, and '70s.
They wanted computers to run the military establishment even though
they hadn't mastered chess yet.  The reason that these largely useless
projects kept going was that the people involved were having a good
time (and making good money) and the Congress never seemed to
understand what was going on.

As for AI and 5th generation computers, I know of very few people in
the AI community who believe in any of that nonsense.  Nevertheless,
some will use it to pry larger grants out of the government or to sell
high-priced seminars to the gullible public.

What keeps happening, it seems, is that people take a few partially-
understood facts and principles then extrapolate a few light years
away and declare that it must be possible to do this new thing.  As
long as such activities are rewarded, they will continue to
proliferate.  Why settle for a trip to the beach when you can head
toward Andromeda?

        Les Earnest

------------------------------

Date: 02 Jan 85  2300 PST
From: Richard Vistnes <RV@SU-AI.ARPA>
Subject: Reading machines & news

I seem to remember someone a while ago asking about the availability
of machines that could `read' a page of text with a camera and produce
computer-readable text.  In the latest issue of Fortune magazine
(Jan 7 '85, p.74) there's an article about speech recognition, and it
mentions that Kurzweil (formerly of MIT, I believe) let Xerox produce his
reading machine, and that this machine can read text in several
different fonts.  Maybe someone at Xerox can supply more information.

                - Richard Vistnes

------------------------------

Date: 02 Jan 85  1107 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Upcoming conference submission deadlines

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

(details in file CONFER.TXT[2,2] at SAIL.)

7-Jan-85: IJCAI-85
10-Jan-85: VLSI-85
12-Jan-85: Theoretical Approaches to Natural Language Understanding
14-Jan-85: Logics of Programs 1985
15-Jan-85: Symposium on Complexity of Approximately Solved Problems
15-Jan-85: Workshop on Environments for Programming-in-the-Large
15-Jan-85: 1985 CHAPEL HILL CONFERENCE ON VLSI
18-Jan-85: Computational Linguistics
31-Jan-85: FUNCTIONAL PROGRAMMING LANGUAGES AND COMPUTER ARCHITECTURE
31-Jan-85: Conference - Intelligent Systems and Machines
4-Feb-85: CONFERENCE ON SOFTWARE MAINTENANCE -- 1985
4-Feb-85: Sigmetrics '85
11-Mar-85: THEORETICAL AND METHODOLOGICAL ISSUES IN MACHINE TRANSLATION OF
        NATURAL LANGUAGES
1-Apr-85: Logic, language and computation meeting
29-Apr-85: FOUNDATIONS OF COMPUTER SCIENCE (FOCS)
1-May-85: Expert Systems in Government Conference

You can get the file to your computer using FTP.

------------------------------

Date: Wed 2 Jan 85 17:16:47-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Representation and Presentation  (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]

                             CSLI SEMINAR
                 ``Representation and Presentation''
             Benny Shanon, Hebrew University of Jerusalem
     Wednesday, January 9 at 4:00 pm in the Ventura conference room

A series of arguments, drawn from various aspects of psychological
phenomenology, is marshalled against the representational-computational
view of mind.  The argument from context marks the
unconstrained variation of meaning with context, hence the impossibility
of a full, comprehensive semantic representation; the argument from
medium points out that medium is an ineliminable contributor to meaning
and that a variety of psychological patterns do not allow for a
distinction between medium and message, hence they cannot be accounted
for by means of abstract, symbolic representations; the argument from
development notes that the representational view not only cannot
account for the problem of the origin in cognition, but that it leads
to unnatural and even paradoxical patterns whereby what is theoretically
simple is phenomenologically complex and/or developmentally late and
what is theoretically complex is phenomenologically simple and/or
developmentally early.  On the basis of these arguments it is
suggested that cognition be viewed as a dialectic process between two
types of patterns: representational and presentational.

------------------------------

Date: 02 Jan 85  2347 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Seminar - Rewrite Rules for Functional Programming   (IBM-SJ)

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

2:00pm  Monday, Jan. 7
Room 1C-012 (in Building 28 at IBM)

Ed Wimmers
IBM Research San Jose

        What does it mean for rewrite rules to be "correct"?

We consider an operational definition for FP via rewrite rules.  What would it
mean for such a definition to be correct?  We certainly want the rewrite rules
to capture correctly our intuitions regarding the meaning of the primitive
functions.  We also want there to be enough rewrite rules to compute the
correct meaning of all expressions, but not so many that two expressions
that should be different are made equivalent.  And what does it mean for
there to be
"enough" rules?  We develop a new formal criterion for deciding whether there
are enough rewrite rules and show that our rewrite rules meet that criterion.
Our proof technique is novel in the way we use the semantic domain to guide an
assignment of types to the untyped language FP; this allows us to adopt powerful
techniques from the typed lambda-calculus theory.

Host: John Backus
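
[The flavor of an operational definition by rewrite rules can be sketched
outside FP itself.  The following toy term rewriter is a hypothetical
illustration added for this archive, not material from the seminar; the
term encoding and names are invented.  Each combining form gets one
rewrite rule, and evaluation is just repeated rewriting of applications:]

```python
# Toy term rewriter in the spirit of FP's operational semantics.
# A "program" is either a primitive name ('id', 'first') or a tuple
# ('comp', f, g) / ('cons', f, g).  Applying a program to a value
# rewrites the application step by step until a value remains.

def apply_fn(fn, x):
    """Rewrite the application (fn : x) down to a value."""
    if fn == 'id':                 # identity primitive: id : x => x
        return x
    if fn == 'first':              # selector primitive: first : <a,b> => a
        return x[0]
    tag = fn[0]
    if tag == 'comp':              # composition rule: (f o g) : x => f : (g : x)
        _, f, g = fn
        return apply_fn(f, apply_fn(g, x))
    if tag == 'cons':              # construction rule: [f, g] : x => <f : x, g : x>
        _, f, g = fn
        return [apply_fn(f, x), apply_fn(g, x)]
    raise ValueError(f"no rewrite rule for {fn!r}")

# ('cons', 'first', 'id') : [1, 2]  rewrites to  [1, [1, 2]]
print(apply_fn(('cons', 'first', 'id'), [1, 2]))
```

[The correctness question the talk raises is visible even here: dropping
the 'cons' rule leaves some applications stuck (too few rules), while
adding a bogus rule such as (f o g) : x => f : x would wrongly identify
programs that should differ (too many).]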


------------------------------

End of AIList Digest
********************